Test Report: Docker_Linux 14269

                    
ab7bb61b313d0ba57acd833ecb833795c1bc5389:2022-06-02:24239

Tests failed (17/278)

TestAddons/parallel/Registry (212.85s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 10.061377ms
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-twtpk" [164ddabb-ed64-4c3b-b6d0-febaab2c0a84] Running
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009140655s
=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-bglxc" [aa97146a-3619-43a6-b652-34d2cff4c61a] Running
=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009162659s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220602171222-283122 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220602171222-283122 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: (dbg) Non-zero exit: kubectl --context addons-20220602171222-283122 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (37.762038115s)
-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	Unable to use a TTY - input is not a terminal or the right kind of file
	If you don't see a command prompt, try pressing enter.
	Error attaching, falling back to logs: 
	pod default/registry-test terminated (Error)

** /stderr **
addons_test.go:297: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-20220602171222-283122 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:301: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
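For manual triage, the failing in-cluster probe can be replayed with the exact command the test ran (a busybox pod is used because the service name registry.kube-system.svc.cluster.local resolves only from inside the cluster):

	kubectl --context addons-20220602171222-283122 run --rm registry-test \
	  --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"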
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220602171222-283122 ip
2022/06/02 17:14:43 [DEBUG] GET http://192.168.49.2:5000
2022/06/02 17:14:43 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:14:43 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/06/02 17:14:44 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:14:44 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/06/02 17:14:46 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:14:46 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/06/02 17:14:50 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:14:50 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/06/02 17:14:58 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:14:59 [DEBUG] GET http://192.168.49.2:5000
2022/06/02 17:14:59 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:14:59 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/06/02 17:15:00 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:00 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/06/02 17:15:02 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:02 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/06/02 17:15:06 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:06 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/06/02 17:15:14 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:15 [DEBUG] GET http://192.168.49.2:5000
2022/06/02 17:15:15 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:15 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/06/02 17:15:16 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:16 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/06/02 17:15:18 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:18 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/06/02 17:15:22 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:22 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/06/02 17:15:30 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:31 [DEBUG] GET http://192.168.49.2:5000
2022/06/02 17:15:31 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:31 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/06/02 17:15:32 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:32 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/06/02 17:15:34 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:34 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/06/02 17:15:38 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:38 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/06/02 17:15:46 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:48 [DEBUG] GET http://192.168.49.2:5000
2022/06/02 17:15:48 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:48 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/06/02 17:15:49 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:49 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/06/02 17:15:51 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:51 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/06/02 17:15:55 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:15:55 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/06/02 17:16:03 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:16:06 [DEBUG] GET http://192.168.49.2:5000
2022/06/02 17:16:06 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:16:06 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/06/02 17:16:07 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:16:07 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/06/02 17:16:09 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:16:09 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/06/02 17:16:13 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:16:13 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/06/02 17:16:21 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:16:24 [DEBUG] GET http://192.168.49.2:5000
2022/06/02 17:16:24 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:16:24 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/06/02 17:16:25 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:16:25 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/06/02 17:16:27 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:16:27 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/06/02 17:16:31 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:16:31 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/06/02 17:16:39 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:16:47 [DEBUG] GET http://192.168.49.2:5000
2022/06/02 17:16:47 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:16:47 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/06/02 17:16:48 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:16:48 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/06/02 17:16:50 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:16:50 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/06/02 17:16:54 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:16:54 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/06/02 17:17:02 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:17:10 [DEBUG] GET http://192.168.49.2:5000
2022/06/02 17:17:10 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:17:10 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2022/06/02 17:17:11 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:17:11 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2022/06/02 17:17:13 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:17:13 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2022/06/02 17:17:17 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2022/06/02 17:17:17 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2022/06/02 17:17:25 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:335: failed to check external access to http://192.168.49.2:5000: GET http://192.168.49.2:5000 giving up after 5 attempt(s): Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
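The cadence in the log above is an exponential backoff: each cycle makes an initial request plus four retries at 1s, 2s, 4s, and 8s before giving up after 5 attempt(s). A minimal shell sketch of the same external check, with curl standing in for the test's Go HTTP client (curl availability on the host is an assumption):

	# five attempts total; back off 1s, 2s, 4s, 8s between failures (final 0 is a no-op)
	for delay in 1 2 4 8 0; do
	  curl -sS --max-time 5 http://192.168.49.2:5000 && break
	  sleep "$delay"
	done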
addons_test.go:338: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220602171222-283122 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-20220602171222-283122
helpers_test.go:235: (dbg) docker inspect addons-20220602171222-283122:
-- stdout --
	[
	    {
	        "Id": "45b83224ffe1a9aa9a8f38b7a42321844d5292d0ca016470edada70f5442c3fd",
	        "Created": "2022-06-02T17:12:37.360541484Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 284832,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:12:37.72234853Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/45b83224ffe1a9aa9a8f38b7a42321844d5292d0ca016470edada70f5442c3fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/45b83224ffe1a9aa9a8f38b7a42321844d5292d0ca016470edada70f5442c3fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/45b83224ffe1a9aa9a8f38b7a42321844d5292d0ca016470edada70f5442c3fd/hosts",
	        "LogPath": "/var/lib/docker/containers/45b83224ffe1a9aa9a8f38b7a42321844d5292d0ca016470edada70f5442c3fd/45b83224ffe1a9aa9a8f38b7a42321844d5292d0ca016470edada70f5442c3fd-json.log",
	        "Name": "/addons-20220602171222-283122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20220602171222-283122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20220602171222-283122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5ce208094f9a1ba49e500b71f185dd3c908e6f1d128794467ece7797facc768a-init/diff:/var/lib/docker/overlay2/9517781e97ae29bbe1cf38381a601e806342524da1c5d5dc15ba54ec17f204b6/diff:/var/lib/docker/overlay2/28e14d8724bd82b9b6adde55b66e6d1f1be4fa6c3379f2f44ca1138dd648175a/diff:/var/lib/docker/overlay2/24589dfb9ddc941e7cb22b5e38dec69d1af0a29a330d344152ca7f26010354a8/diff:/var/lib/docker/overlay2/66c45d6273a3507b6e0b6b0753ec4fc82f07160fd115626e24a42bda27bbba86/diff:/var/lib/docker/overlay2/a6f0c661bd178a2a0fef034493a114067e8952b8516e69aff1e1e94a75918bbc/diff:/var/lib/docker/overlay2/5b40a045506372be84bfb01d05b068bc4cc926c5eb9e868226c4c386e8a331c7/diff:/var/lib/docker/overlay2/dbf9a9c6e59628984c95ea93ff4ada66c53b22ca3c2f910f70efba895cf40718/diff:/var/lib/docker/overlay2/5eea619393ee989a35a37a13b0633685af91729e1b74f6e73fd94b63c9d7d1fa/diff:/var/lib/docker/overlay2/24de6fbece5386c3658fa3e2c01723e6c1c24fb837d53b3d5a8c75b64ce2cd06/diff:/var/lib/docker/overlay2/04593e
f7d3bf72ee8b7439b0326af5b9818930e0dd48c35dba97e3b4ae39f4ee/diff:/var/lib/docker/overlay2/09310697cecb557085bb4d5dfc810a82fc533759ad98179263b2fc94153a64fa/diff:/var/lib/docker/overlay2/ee75c420f0615a19a5d44cfe042bdca5a2cf0f1c541168ba424140cf2aa7f98d/diff:/var/lib/docker/overlay2/fc8643a3e2060ab0365cd1227ab9ebbd5a573e6ebf8379bc6b64a756a1f66562/diff:/var/lib/docker/overlay2/0f8571cab87b35114fab495cbde9af8ac7e58be99412da82a79c90239e2227e1/diff:/var/lib/docker/overlay2/e1a5ae079e4f566081e5e786955257c23809c8719ea6b4593c8c91ac00380379/diff:/var/lib/docker/overlay2/373565e63dec12c85906e80b4a030f7d852992a756749be9424ac17802d589c0/diff:/var/lib/docker/overlay2/b886b42aa5e9f06b4381a6f58d55338774a2d02af2e91e3de156aa4bca4e4be0/diff:/var/lib/docker/overlay2/dee534a744ce54d08ca752b7f64e1772c063d4ba1e008c5a9773188bb009c377/diff:/var/lib/docker/overlay2/c8cf6667f29ffc5417675e4bd8eee6a91468de13292e3cb6ae2c70d3ad56d5fd/diff:/var/lib/docker/overlay2/072b57a4228c3928bdb27162809b102b485bb94c5f726c9a3e9892267ed30839/diff:/var/lib/d
ocker/overlay2/491d5b51bf6de1cfc4eef288c454f54e3b424e3b83fd9dc216c35891c7923f55/diff:/var/lib/docker/overlay2/78f167c2cc286e4058ef09d5a719437afddde854aac41b8d054567865fa61024/diff:/var/lib/docker/overlay2/10763b3ba26983ed9426ccca08bfceb7037d95c69827847e2ff755b4f2c5a4ab/diff:/var/lib/docker/overlay2/e559098efb21164d96bd0aee50fee9f48310df68887c5c884f4ba3f3fc895260/diff:/var/lib/docker/overlay2/555ef1505fe2b1ab4244f073e9212f269d70d8cd6ca0cd517d21cc80cd25b4c1/diff:/var/lib/docker/overlay2/0201da7c907a1f8f564cda48c3f3a403e484d901739d2584f1343a7907685c5e/diff:/var/lib/docker/overlay2/0abae5d84f8497a5114349421415431c43a34015ad98f6c9c3140143d2f1f383/diff:/var/lib/docker/overlay2/80c37982eef2dc00bd08f208551df377570134af84008156f6a500855fd2ba10/diff:/var/lib/docker/overlay2/cc38481bb11be97cd54c0032709337a491580c2f0ae6d3f7eec4c3616f8e3126/diff:/var/lib/docker/overlay2/dda978aeb4c7317bafbc077229c4417a4d8e7658d9604803c401448b471eab5e/diff:/var/lib/docker/overlay2/7c230ae0970689e2932cdaadc9e3f86ec8215fbf384c9eb59cfb958daf7
3197e/diff:/var/lib/docker/overlay2/145db51b169211e46c3bbab5ca998b2a019a4a813801acc2dae9b2dc09bc2ecc/diff:/var/lib/docker/overlay2/e604163672a013549c59bc042d1b932a3dad9e104d7079fbbb3ca24c29351c0d/diff:/var/lib/docker/overlay2/5a304d8680fd3fb594754173a70646923e2715ed21c98b8c43e1f1ef2d7abd6a/diff:/var/lib/docker/overlay2/b77f00351ddd70be2f3d9dd622b78e1a2937c4ba71585b357fef88d285eb762e/diff:/var/lib/docker/overlay2/196b6ae042b9b08748253c16f5e2b56d7f9a25077008d3d447cbc75447ed5e2a/diff:/var/lib/docker/overlay2/8c7c3fbb0bed8f7440d11104257806c70d16fd7d432311fc208d5012a439e5a1/diff:/var/lib/docker/overlay2/10604aa3b8ed5698c9fa8f55e00b9a9dc27d6b99135255d6c756fc080e615fc6/diff:/var/lib/docker/overlay2/c6602a17e9b0ab1d84297109204b48e60f293952fbc1feaf362895ec86860ead/diff:/var/lib/docker/overlay2/3828c5e50db9be2b4bc30fce040392ea735da5eaa819ad0848cb4fb79fc367da/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ce208094f9a1ba49e500b71f185dd3c908e6f1d128794467ece7797facc768a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ce208094f9a1ba49e500b71f185dd3c908e6f1d128794467ece7797facc768a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ce208094f9a1ba49e500b71f185dd3c908e6f1d128794467ece7797facc768a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20220602171222-283122",
	                "Source": "/var/lib/docker/volumes/addons-20220602171222-283122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20220602171222-283122",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20220602171222-283122",
	                "name.minikube.sigs.k8s.io": "addons-20220602171222-283122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "833243602ff83cb24e1e144199fa59c0f814481efd63f0521eae397d80d6fcac",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49447"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49446"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49443"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49444"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/833243602ff8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20220602171222-283122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "45b83224ffe1",
	                        "addons-20220602171222-283122"
	                    ],
	                    "NetworkID": "9e9c034cfe0b0e789a60f190d97895decfa7a82f9cca8bbb0a11dee48d6ab21e",
	                    "EndpointID": "ca23c3d41791a7b4e40692349258c60852f22878c2b746686230d37912811a8f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
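Note that in the inspect output above, 5000/tcp is published only on loopback (HostIp 127.0.0.1, HostPort 49445), while the external-access check dials the container IP 192.168.49.2:5000 directly. A hypothetical cross-check from the host (not part of the test) of which endpoint actually answers:

	docker port addons-20220602171222-283122 5000/tcp    # should print 127.0.0.1:49445 per the inspect output
	curl -sS http://127.0.0.1:49445/v2/                  # Docker registry API root, assuming the registry is reachable there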
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-20220602171222-283122 -n addons-20220602171222-283122
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220602171222-283122 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-20220602171222-283122 logs -n 25: (1.170199497s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                  |                Profile                |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | --all                                 | download-only-20220602171206-283122   | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:12 UTC | 02 Jun 22 17:12 UTC |
	| delete  | -p                                    | download-only-20220602171206-283122   | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:12 UTC | 02 Jun 22 17:12 UTC |
	|         | download-only-20220602171206-283122   |                                       |         |                |                     |                     |
	| delete  | -p                                    | download-only-20220602171206-283122   | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:12 UTC | 02 Jun 22 17:12 UTC |
	|         | download-only-20220602171206-283122   |                                       |         |                |                     |                     |
	| delete  | -p                                    | download-docker-20220602171218-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:12 UTC | 02 Jun 22 17:12 UTC |
	|         | download-docker-20220602171218-283122 |                                       |         |                |                     |                     |
	| delete  | -p                                    | binary-mirror-20220602171221-283122   | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:12 UTC | 02 Jun 22 17:12 UTC |
	|         | binary-mirror-20220602171221-283122   |                                       |         |                |                     |                     |
	| start   | -p                                    | addons-20220602171222-283122          | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:12 UTC | 02 Jun 22 17:13 UTC |
	|         | addons-20220602171222-283122          |                                       |         |                |                     |                     |
	|         | --wait=true --memory=4000             |                                       |         |                |                     |                     |
	|         | --alsologtostderr                     |                                       |         |                |                     |                     |
	|         | --addons=registry                     |                                       |         |                |                     |                     |
	|         | --addons=metrics-server               |                                       |         |                |                     |                     |
	|         | --addons=volumesnapshots              |                                       |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver          |                                       |         |                |                     |                     |
	|         | --addons=gcp-auth                     |                                       |         |                |                     |                     |
	|         | --driver=docker                       |                                       |         |                |                     |                     |
	|         | --container-runtime=docker            |                                       |         |                |                     |                     |
	|         | --addons=ingress                      |                                       |         |                |                     |                     |
	|         | --addons=ingress-dns                  |                                       |         |                |                     |                     |
	|         | --addons=helm-tiller                  |                                       |         |                |                     |                     |
	| addons  | addons-20220602171222-283122          | addons-20220602171222-283122          | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:14 UTC | 02 Jun 22 17:14 UTC |
	|         | addons disable metrics-server         |                                       |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                                       |         |                |                     |                     |
	| addons  | addons-20220602171222-283122          | addons-20220602171222-283122          | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:14 UTC | 02 Jun 22 17:14 UTC |
	|         | addons disable helm-tiller            |                                       |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                                       |         |                |                     |                     |
	| ssh     | addons-20220602171222-283122          | addons-20220602171222-283122          | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:14 UTC | 02 Jun 22 17:14 UTC |
	|         | ssh curl -s http://127.0.0.1/         |                                       |         |                |                     |                     |
	|         | -H 'Host: nginx.example.com'          |                                       |         |                |                     |                     |
	| ip      | addons-20220602171222-283122          | addons-20220602171222-283122          | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:14 UTC | 02 Jun 22 17:14 UTC |
	|         | ip                                    |                                       |         |                |                     |                     |
	| addons  | addons-20220602171222-283122          | addons-20220602171222-283122          | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:14 UTC | 02 Jun 22 17:14 UTC |
	|         | addons disable ingress-dns            |                                       |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                                       |         |                |                     |                     |
	| addons  | addons-20220602171222-283122          | addons-20220602171222-283122          | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:14 UTC | 02 Jun 22 17:14 UTC |
	|         | addons disable ingress                |                                       |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                                       |         |                |                     |                     |
	| addons  | addons-20220602171222-283122          | addons-20220602171222-283122          | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:14 UTC | 02 Jun 22 17:14 UTC |
	|         | addons disable                        |                                       |         |                |                     |                     |
	|         | csi-hostpath-driver                   |                                       |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                                       |         |                |                     |                     |
	| addons  | addons-20220602171222-283122          | addons-20220602171222-283122          | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:14 UTC | 02 Jun 22 17:14 UTC |
	|         | addons disable volumesnapshots        |                                       |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                                       |         |                |                     |                     |
	| ip      | addons-20220602171222-283122          | addons-20220602171222-283122          | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:14 UTC | 02 Jun 22 17:14 UTC |
	|         | ip                                    |                                       |         |                |                     |                     |
	| addons  | addons-20220602171222-283122          | addons-20220602171222-283122          | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:17 UTC | 02 Jun 22 17:17 UTC |
	|         | addons disable registry               |                                       |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                                       |         |                |                     |                     |
	|---------|---------------------------------------|---------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 17:12:22
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 17:12:22.449638  284165 out.go:296] Setting OutFile to fd 1 ...
	I0602 17:12:22.449859  284165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:12:22.449871  284165 out.go:309] Setting ErrFile to fd 2...
	I0602 17:12:22.449875  284165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:12:22.449995  284165 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 17:12:22.450334  284165 out.go:303] Setting JSON to false
	I0602 17:12:22.451222  284165 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6896,"bootTime":1654183047,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0602 17:12:22.451304  284165 start.go:125] virtualization: kvm guest
	I0602 17:12:22.454193  284165 out.go:177] * [addons-20220602171222-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0602 17:12:22.456167  284165 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 17:12:22.456076  284165 notify.go:193] Checking for updates...
	I0602 17:12:22.457896  284165 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 17:12:22.459800  284165 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:12:22.461500  284165 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 17:12:22.463219  284165 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0602 17:12:22.466579  284165 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 17:12:22.502769  284165 docker.go:137] docker version: linux-20.10.16
	I0602 17:12:22.502883  284165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:12:22.607503  284165 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-06-02 17:12:22.530939961 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:12:22.607614  284165 docker.go:254] overlay module found
	I0602 17:12:22.610029  284165 out.go:177] * Using the docker driver based on user configuration
	I0602 17:12:22.611564  284165 start.go:284] selected driver: docker
	I0602 17:12:22.611589  284165 start.go:806] validating driver "docker" against <nil>
	I0602 17:12:22.611616  284165 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 17:12:22.612521  284165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:12:22.720412  284165 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-06-02 17:12:22.641359709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:12:22.720570  284165 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 17:12:22.720829  284165 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 17:12:22.723341  284165 out.go:177] * Using Docker driver with the root privilege
	I0602 17:12:22.725059  284165 cni.go:95] Creating CNI manager for ""
	I0602 17:12:22.725084  284165 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 17:12:22.725100  284165 start_flags.go:306] config:
	{Name:addons-20220602171222-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:addons-20220602171222-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:12:22.727034  284165 out.go:177] * Starting control plane node addons-20220602171222-283122 in cluster addons-20220602171222-283122
	I0602 17:12:22.728500  284165 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 17:12:22.730255  284165 out.go:177] * Pulling base image ...
	I0602 17:12:22.731915  284165 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:12:22.731977  284165 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 17:12:22.732000  284165 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 17:12:22.732012  284165 cache.go:57] Caching tarball of preloaded images
	I0602 17:12:22.732274  284165 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 17:12:22.732290  284165 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 17:12:22.732598  284165 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/config.json ...
	I0602 17:12:22.732635  284165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/config.json: {Name:mk184741e99da13c118ff4c9e4b0735460748bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:12:22.776977  284165 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 17:12:22.777029  284165 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 17:12:22.777050  284165 cache.go:206] Successfully downloaded all kic artifacts
	I0602 17:12:22.777123  284165 start.go:352] acquiring machines lock for addons-20220602171222-283122: {Name:mk795c79b5e96d99510858f1ceab0b808882ac45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 17:12:22.777314  284165 start.go:356] acquired machines lock for "addons-20220602171222-283122" in 146.58µs
	I0602 17:12:22.777346  284165 start.go:91] Provisioning new machine with config: &{Name:addons-20220602171222-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:addons-20220602171222-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 17:12:22.777435  284165 start.go:131] createHost starting for "" (driver="docker")
	I0602 17:12:22.779958  284165 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0602 17:12:22.780247  284165 start.go:165] libmachine.API.Create for "addons-20220602171222-283122" (driver="docker")
	I0602 17:12:22.780320  284165 client.go:168] LocalClient.Create starting
	I0602 17:12:22.780480  284165 main.go:134] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem
	I0602 17:12:22.912762  284165 main.go:134] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem
	I0602 17:12:23.079525  284165 cli_runner.go:164] Run: docker network inspect addons-20220602171222-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 17:12:23.110940  284165 cli_runner.go:211] docker network inspect addons-20220602171222-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 17:12:23.111028  284165 network_create.go:272] running [docker network inspect addons-20220602171222-283122] to gather additional debugging logs...
	I0602 17:12:23.111049  284165 cli_runner.go:164] Run: docker network inspect addons-20220602171222-283122
	W0602 17:12:23.141877  284165 cli_runner.go:211] docker network inspect addons-20220602171222-283122 returned with exit code 1
	I0602 17:12:23.141913  284165 network_create.go:275] error running [docker network inspect addons-20220602171222-283122]: docker network inspect addons-20220602171222-283122: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20220602171222-283122
	I0602 17:12:23.141936  284165 network_create.go:277] output of [docker network inspect addons-20220602171222-283122]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20220602171222-283122
	
	** /stderr **
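
The non-zero exit above is expected on a clean host: minikube first probes for a network named after the profile, and "No such network" simply routes it into the create path that follows. Reduced to its shape (names and addresses taken from this run):

	# Sketch of the probe-then-create flow minikube runs here
	NET=addons-20220602171222-283122
	docker network inspect "$NET" >/dev/null 2>&1 ||
	  docker network create --driver=bridge \
	    --subnet=192.168.49.0/24 --gateway=192.168.49.1 "$NET"
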
	I0602 17:12:23.142012  284165 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 17:12:23.172139  284165 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0000103b0] misses:0}
	I0602 17:12:23.172234  284165 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 17:12:23.172258  284165 network_create.go:115] attempt to create docker network addons-20220602171222-283122 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 17:12:23.172320  284165 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20220602171222-283122
	I0602 17:12:23.239664  284165 network_create.go:99] docker network addons-20220602171222-283122 192.168.49.0/24 created
	I0602 17:12:23.239734  284165 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20220602171222-283122" container
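
The "calculated static IP" falls straight out of the subnet reservation above: the gateway takes the first host address (192.168.49.1) and the node container the second (192.168.49.2, the ClientMin of the reserved range). What Docker actually allocated can be read back directly:

	docker network inspect addons-20220602171222-283122 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} gw={{.Gateway}}{{end}}'
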
	I0602 17:12:23.239843  284165 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 17:12:23.270958  284165 cli_runner.go:164] Run: docker volume create addons-20220602171222-283122 --label name.minikube.sigs.k8s.io=addons-20220602171222-283122 --label created_by.minikube.sigs.k8s.io=true
	I0602 17:12:23.302886  284165 oci.go:103] Successfully created a docker volume addons-20220602171222-283122
	I0602 17:12:23.302995  284165 cli_runner.go:164] Run: docker run --rm --name addons-20220602171222-283122-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220602171222-283122 --entrypoint /usr/bin/test -v addons-20220602171222-283122:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 17:12:30.549150  284165 cli_runner.go:217] Completed: docker run --rm --name addons-20220602171222-283122-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220602171222-283122 --entrypoint /usr/bin/test -v addons-20220602171222-283122:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib: (7.246094365s)
	I0602 17:12:30.549194  284165 oci.go:107] Successfully prepared a docker volume addons-20220602171222-283122
	I0602 17:12:30.549271  284165 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:12:30.549304  284165 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 17:12:30.549385  284165 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20220602171222-283122:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 17:12:37.223903  284165 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-20220602171222-283122:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (6.674441662s)
	I0602 17:12:37.223940  284165 kic.go:188] duration metric: took 6.674631 seconds to extract preloaded images to volume
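
The preload step is just tar into a named volume via a throwaway container: the lz4 tarball is mounted read-only, the volume at /extractDir, and tar runs with -I lz4. To spot-check the result, any small image with a shell can mount the same volume; busybox here is an illustrative choice, not something the test itself uses:

	# Sketch: list the image store just extracted into the volume
	docker run --rm -v addons-20220602171222-283122:/var busybox ls /var/lib/docker
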
	W0602 17:12:37.224101  284165 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0602 17:12:37.224220  284165 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 17:12:37.329774  284165 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20220602171222-283122 --name addons-20220602171222-283122 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20220602171222-283122 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20220602171222-283122 --network addons-20220602171222-283122 --ip 192.168.49.2 --volume addons-20220602171222-283122:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	I0602 17:12:37.732182  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Running}}
	I0602 17:12:37.767726  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Status}}
	I0602 17:12:37.800685  284165 cli_runner.go:164] Run: docker exec addons-20220602171222-283122 stat /var/lib/dpkg/alternatives/iptables
	I0602 17:12:37.864209  284165 oci.go:247] the created container "addons-20220602171222-283122" has a running status.
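
All five --publish flags in the docker run above bind 127.0.0.1 with an empty host port, so Docker assigns ephemeral ports; that is why later steps keep resolving 22/tcp through docker container inspect (49447 in this run). docker port gives the same answer more directly:

	docker port addons-20220602171222-283122 22/tcp
	# -> 127.0.0.1:49447 in this run
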
	I0602 17:12:37.864260  284165 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa...
	I0602 17:12:37.977255  284165 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0602 17:12:38.070254  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Status}}
	I0602 17:12:38.107681  284165 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0602 17:12:38.107725  284165 kic_runner.go:114] Args: [docker exec --privileged addons-20220602171222-283122 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0602 17:12:38.197257  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Status}}
	I0602 17:12:38.230338  284165 machine.go:88] provisioning docker machine ...
	I0602 17:12:38.230378  284165 ubuntu.go:169] provisioning hostname "addons-20220602171222-283122"
	I0602 17:12:38.230478  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:12:38.264816  284165 main.go:134] libmachine: Using SSH client type: native
	I0602 17:12:38.265104  284165 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49447 <nil> <nil>}
	I0602 17:12:38.265138  284165 main.go:134] libmachine: About to run SSH command:
	sudo hostname addons-20220602171222-283122 && echo "addons-20220602171222-283122" | sudo tee /etc/hostname
	I0602 17:12:38.395061  284165 main.go:134] libmachine: SSH cmd err, output: <nil>: addons-20220602171222-283122
	
	I0602 17:12:38.395171  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:12:38.429898  284165 main.go:134] libmachine: Using SSH client type: native
	I0602 17:12:38.430087  284165 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49447 <nil> <nil>}
	I0602 17:12:38.430128  284165 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20220602171222-283122' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20220602171222-283122/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20220602171222-283122' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 17:12:38.545108  284165 main.go:134] libmachine: SSH cmd err, output: <nil>: 
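
With the key injected and the hostname set, the node is reachable over plain SSH using the generated identity and the mapped port, which is handy when reproducing failures outside the test harness. A sketch with this run's port; MK is the same illustrative shorthand for the .minikube directory as above:

	ssh -i "$MK/machines/addons-20220602171222-283122/id_rsa" \
	  -p 49447 docker@127.0.0.1 hostname
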
	I0602 17:12:38.545147  284165 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 17:12:38.545175  284165 ubuntu.go:177] setting up certificates
	I0602 17:12:38.545188  284165 provision.go:83] configureAuth start
	I0602 17:12:38.545257  284165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220602171222-283122
	I0602 17:12:38.577059  284165 provision.go:138] copyHostCerts
	I0602 17:12:38.577156  284165 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 17:12:38.577288  284165 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 17:12:38.577355  284165 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1679 bytes)
	I0602 17:12:38.577404  284165 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.addons-20220602171222-283122 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20220602171222-283122]
	I0602 17:12:38.878029  284165 provision.go:172] copyRemoteCerts
	I0602 17:12:38.878098  284165 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 17:12:38.878140  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:12:38.910017  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
	I0602 17:12:39.001171  284165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 17:12:39.020150  284165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 17:12:39.039427  284165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
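
configureAuth generates a server certificate whose SANs cover the node IP, loopback, and the profile name (the san=[...] list at 17:12:38.577404), then ships it with the CA to /etc/docker so dockerd can require TLS on tcp://0.0.0.0:2376. The SAN list can be confirmed from the host-side copy (MK as above):

	openssl x509 -noout -text -in "$MK/machines/server.pem" |
	  grep -A1 'Subject Alternative Name'
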
	I0602 17:12:39.057660  284165 provision.go:86] duration metric: configureAuth took 512.451043ms
	I0602 17:12:39.057693  284165 ubuntu.go:193] setting minikube options for container-runtime
	I0602 17:12:39.057869  284165 config.go:178] Loaded profile config "addons-20220602171222-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:12:39.057919  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:12:39.089506  284165 main.go:134] libmachine: Using SSH client type: native
	I0602 17:12:39.089694  284165 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49447 <nil> <nil>}
	I0602 17:12:39.089712  284165 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 17:12:39.205830  284165 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 17:12:39.205858  284165 ubuntu.go:71] root file system type: overlay
	I0602 17:12:39.206047  284165 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 17:12:39.206106  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:12:39.238980  284165 main.go:134] libmachine: Using SSH client type: native
	I0602 17:12:39.239188  284165 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49447 <nil> <nil>}
	I0602 17:12:39.239290  284165 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 17:12:39.362307  284165 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 17:12:39.362386  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:12:39.394220  284165 main.go:134] libmachine: Using SSH client type: native
	I0602 17:12:39.394380  284165 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49447 <nil> <nil>}
	I0602 17:12:39.394399  284165 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 17:12:40.034466  284165 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:12:39.358616597 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0602 17:12:40.034506  284165 machine.go:91] provisioned docker machine in 1.804142025s
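
The diff output above is the point of the command issued at 17:12:39.394: "diff -u old new || { mv; daemon-reload; enable; restart; }" only touches dockerd when the rendered unit actually differs from what is installed, which keeps provisioning idempotent across repeated starts. The pattern in isolation:

	# Sketch of the idempotent unit-update pattern used above
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl restart docker
	}
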
	I0602 17:12:40.034520  284165 client.go:171] LocalClient.Create took 17.254187209s
	I0602 17:12:40.034554  284165 start.go:173] duration metric: libmachine.API.Create for "addons-20220602171222-283122" took 17.254303816s
	I0602 17:12:40.034571  284165 start.go:306] post-start starting for "addons-20220602171222-283122" (driver="docker")
	I0602 17:12:40.034583  284165 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 17:12:40.034657  284165 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 17:12:40.034714  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:12:40.066192  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
	I0602 17:12:40.153950  284165 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 17:12:40.156932  284165 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 17:12:40.156964  284165 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 17:12:40.156976  284165 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 17:12:40.156985  284165 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 17:12:40.157001  284165 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 17:12:40.157090  284165 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 17:12:40.157123  284165 start.go:309] post-start completed in 122.5386ms
	I0602 17:12:40.157440  284165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220602171222-283122
	I0602 17:12:40.188472  284165 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/config.json ...
	I0602 17:12:40.188745  284165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 17:12:40.188789  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:12:40.219680  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
	I0602 17:12:40.301658  284165 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 17:12:40.305744  284165 start.go:134] duration metric: createHost completed in 17.528289604s
	I0602 17:12:40.305779  284165 start.go:81] releasing machines lock for "addons-20220602171222-283122", held for 17.528448117s
	I0602 17:12:40.305883  284165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20220602171222-283122
	I0602 17:12:40.338144  284165 ssh_runner.go:195] Run: systemctl --version
	I0602 17:12:40.338203  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:12:40.338214  284165 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 17:12:40.338268  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:12:40.370295  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
	I0602 17:12:40.370630  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
	I0602 17:12:40.476047  284165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 17:12:40.485909  284165 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 17:12:40.495659  284165 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 17:12:40.495753  284165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 17:12:40.505187  284165 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 17:12:40.517971  284165 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 17:12:40.593814  284165 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 17:12:40.673656  284165 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 17:12:40.683370  284165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 17:12:40.757612  284165 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 17:12:40.767727  284165 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 17:12:40.808324  284165 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 17:12:40.850767  284165 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 17:12:40.850870  284165 cli_runner.go:164] Run: docker network inspect addons-20220602171222-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 17:12:40.881159  284165 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0602 17:12:40.884493  284165 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
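
Note the shape of that hosts rewrite: the filtered file is staged under /tmp and then copied over /etc/hosts with sudo cp rather than renamed, since inside a container /etc/hosts is a bind mount and cannot be atomically replaced, only rewritten in place. The entry can be verified from within the node:

	getent hosts host.minikube.internal   # expect 192.168.49.1
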
	I0602 17:12:40.894553  284165 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:12:40.894628  284165 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 17:12:40.927239  284165 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 17:12:40.927263  284165 docker.go:541] Images already preloaded, skipping extraction
	I0602 17:12:40.927313  284165 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 17:12:40.959418  284165 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 17:12:40.959445  284165 cache_images.go:84] Images are preloaded, skipping loading
	I0602 17:12:40.959504  284165 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 17:12:41.043965  284165 cni.go:95] Creating CNI manager for ""
	I0602 17:12:41.043992  284165 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 17:12:41.044004  284165 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 17:12:41.044019  284165 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20220602171222-283122 NodeName:addons-20220602171222-283122 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 17:12:41.044161  284165 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "addons-20220602171222-283122"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
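
The stanzas above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options at 17:12:41.044019 and shipped to /var/tmp/minikube/kubeadm.yaml. When hand-editing such a file, a dry run catches most mistakes before anything is bootstrapped; the flag below exists in kubeadm v1.23:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
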
	
	I0602 17:12:41.044277  284165 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=addons-20220602171222-283122 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:addons-20220602171222-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 17:12:41.044329  284165 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 17:12:41.051742  284165 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 17:12:41.051820  284165 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 17:12:41.058876  284165 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0602 17:12:41.071721  284165 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 17:12:41.085219  284165 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
	I0602 17:12:41.098815  284165 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0602 17:12:41.101910  284165 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 17:12:41.112325  284165 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122 for IP: 192.168.49.2
	I0602 17:12:41.112377  284165 certs.go:187] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 17:12:41.439452  284165 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt ...
	I0602 17:12:41.439501  284165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt: {Name:mka936e67ab4068556890ad36424371b36a2941f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:12:41.439750  284165 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key ...
	I0602 17:12:41.439768  284165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key: {Name:mk101bd8612517d30dd08d8199e7d663467d0620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:12:41.439902  284165 certs.go:187] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 17:12:41.512529  284165 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt ...
	I0602 17:12:41.512570  284165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt: {Name:mk5882db0b873154c10c7d17648fcc009c45135a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:12:41.512801  284165 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key ...
	I0602 17:12:41.512821  284165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key: {Name:mkc31a9d3cf5b6243439dd6712891c1c359c0ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:12:41.512973  284165 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.key
	I0602 17:12:41.512998  284165 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt with IP's: []
	I0602 17:12:41.780988  284165 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt ...
	I0602 17:12:41.781048  284165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: {Name:mk5a311bcf5627f7828451bc31fe2eb181258332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:12:41.781287  284165 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.key ...
	I0602 17:12:41.781306  284165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.key: {Name:mk41d7dc0113bcfe66ff4e8998d4069bd1f725d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:12:41.781434  284165 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/apiserver.key.dd3b5fb2
	I0602 17:12:41.781459  284165 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0602 17:12:41.992451  284165 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/apiserver.crt.dd3b5fb2 ...
	I0602 17:12:41.992503  284165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/apiserver.crt.dd3b5fb2: {Name:mkb452043862fb148516e4969c6d068b16a9a902 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:12:41.992741  284165 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/apiserver.key.dd3b5fb2 ...
	I0602 17:12:41.992759  284165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/apiserver.key.dd3b5fb2: {Name:mkec7ff3ad0fb747639f130ed5df011eb7f45d08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:12:41.992871  284165 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/apiserver.crt
	I0602 17:12:41.992952  284165 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/apiserver.key
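
The apiserver certificate is signed for [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]: the node IP, the first address of the 10.96.0.0/12 ServiceCIDR (which becomes the in-cluster "kubernetes" Service ClusterIP), loopback, and 10.0.0.1, presumably kept for compatibility with older default service ranges. The deployed SANs can be confirmed on the node:

	openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt |
	  grep -A1 'Subject Alternative Name'
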
	I0602 17:12:41.993033  284165 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/proxy-client.key
	I0602 17:12:41.993059  284165 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/proxy-client.crt with IP's: []
	I0602 17:12:42.053994  284165 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/proxy-client.crt ...
	I0602 17:12:42.054040  284165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/proxy-client.crt: {Name:mke62c41b7f53aab313245a737cb2b6d296bc265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:12:42.054271  284165 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/proxy-client.key ...
	I0602 17:12:42.054291  284165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/proxy-client.key: {Name:mka06d8c18aeb7fb3fe412396046d5238787de82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:12:42.054545  284165 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 17:12:42.054600  284165 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 17:12:42.054640  284165 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 17:12:42.054675  284165 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1679 bytes)
	I0602 17:12:42.055364  284165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 17:12:42.074262  284165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0602 17:12:42.092298  284165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 17:12:42.110515  284165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 17:12:42.128973  284165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 17:12:42.148132  284165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0602 17:12:42.166942  284165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 17:12:42.185103  284165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0602 17:12:42.203524  284165 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 17:12:42.222897  284165 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 17:12:42.236337  284165 ssh_runner.go:195] Run: openssl version
	I0602 17:12:42.241300  284165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 17:12:42.249103  284165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:12:42.252470  284165 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:12:42.252526  284165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:12:42.257504  284165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 17:12:42.265271  284165 kubeadm.go:395] StartCluster: {Name:addons-20220602171222-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:addons-20220602171222-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:12:42.265410  284165 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 17:12:42.297620  284165 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 17:12:42.304870  284165 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 17:12:42.311923  284165 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 17:12:42.311987  284165 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 17:12:42.319114  284165 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 17:12:42.319170  284165 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 17:12:42.831285  284165 out.go:204]   - Generating certificates and keys ...
	I0602 17:12:45.186674  284165 out.go:204]   - Booting up control plane ...
	I0602 17:12:52.729283  284165 out.go:204]   - Configuring RBAC rules ...
	I0602 17:12:53.143400  284165 cni.go:95] Creating CNI manager for ""
	I0602 17:12:53.143430  284165 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 17:12:53.143464  284165 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 17:12:53.143611  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:12:53.143715  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=addons-20220602171222-283122 minikube.k8s.io/updated_at=2022_06_02T17_12_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:12:53.154321  284165 ops.go:34] apiserver oom_adj: -16
	I0602 17:12:53.359346  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:12:54.322588  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:12:54.822126  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:12:55.322907  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:12:55.821936  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:12:56.322006  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:12:56.822284  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:12:57.322303  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:12:57.822496  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:12:58.322007  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:12:58.822909  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:12:59.322983  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:12:59.822106  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:13:00.322685  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:13:00.822639  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:13:01.322295  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:13:01.822360  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:13:02.322853  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:13:02.822281  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:13:03.322597  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:13:03.822048  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:13:04.322456  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:13:04.822248  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:13:05.322357  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:13:05.822258  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:13:06.322936  284165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:13:06.384364  284165 kubeadm.go:1045] duration metric: took 13.240794344s to wait for elevateKubeSystemPrivileges.
	I0602 17:13:06.384404  284165 kubeadm.go:397] StartCluster complete in 24.119147972s
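
The burst of identical `kubectl get sa default` lines above is a fixed-interval poll: the default service account is created asynchronously by the controller manager, and minikube keeps retrying (here for about 13s, per the elevateKubeSystemPrivileges metric) presumably so the minikube-rbac binding created just above has a live subject. The equivalent wait as a bash sketch; the 0.5s interval matches the spacing of the timestamps:

	until sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
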
	I0602 17:13:06.384428  284165 settings.go:142] acquiring lock: {Name:mkca69c8f6bc293fef8b552d09d771e1f2253f12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:13:06.384569  284165 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:13:06.384999  284165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk4aad2ea1df51829b8bb57d56bd4d8e58dc96e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:13:06.903746  284165 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20220602171222-283122" rescaled to 1
	I0602 17:13:06.903841  284165 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 17:13:06.903879  284165 addons.go:415] enableAddons start: toEnable=map[], additional=[registry metrics-server volumesnapshots csi-hostpath-driver gcp-auth ingress ingress-dns helm-tiller]
	I0602 17:13:06.903856  284165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 17:13:06.903953  284165 addons.go:65] Setting volumesnapshots=true in profile "addons-20220602171222-283122"
	I0602 17:13:06.903974  284165 addons.go:153] Setting addon volumesnapshots=true in "addons-20220602171222-283122"
	I0602 17:13:06.904023  284165 host.go:66] Checking if "addons-20220602171222-283122" exists ...
	I0602 17:13:06.904042  284165 config.go:178] Loaded profile config "addons-20220602171222-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:13:06.904058  284165 addons.go:65] Setting csi-hostpath-driver=true in profile "addons-20220602171222-283122"
	I0602 17:13:06.906362  284165 out.go:177] * Verifying Kubernetes components...
	I0602 17:13:06.904047  284165 addons.go:65] Setting helm-tiller=true in profile "addons-20220602171222-283122"
	I0602 17:13:06.904092  284165 addons.go:153] Setting addon csi-hostpath-driver=true in "addons-20220602171222-283122"
	I0602 17:13:06.904104  284165 addons.go:65] Setting default-storageclass=true in profile "addons-20220602171222-283122"
	I0602 17:13:06.904117  284165 addons.go:65] Setting registry=true in profile "addons-20220602171222-283122"
	I0602 17:13:06.904128  284165 addons.go:65] Setting storage-provisioner=true in profile "addons-20220602171222-283122"
	I0602 17:13:06.904140  284165 addons.go:65] Setting ingress-dns=true in profile "addons-20220602171222-283122"
	I0602 17:13:06.904126  284165 addons.go:65] Setting gcp-auth=true in profile "addons-20220602171222-283122"
	I0602 17:13:06.904125  284165 addons.go:65] Setting metrics-server=true in profile "addons-20220602171222-283122"
	I0602 17:13:06.904171  284165 addons.go:65] Setting ingress=true in profile "addons-20220602171222-283122"
	I0602 17:13:06.904529  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Status}}
	I0602 17:13:06.907986  284165 addons.go:153] Setting addon helm-tiller=true in "addons-20220602171222-283122"
	I0602 17:13:06.908044  284165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 17:13:06.908055  284165 host.go:66] Checking if "addons-20220602171222-283122" exists ...
	I0602 17:13:06.908074  284165 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20220602171222-283122"
	I0602 17:13:06.908091  284165 addons.go:153] Setting addon registry=true in "addons-20220602171222-283122"
	I0602 17:13:06.908120  284165 addons.go:153] Setting addon ingress-dns=true in "addons-20220602171222-283122"
	I0602 17:13:06.908135  284165 addons.go:153] Setting addon metrics-server=true in "addons-20220602171222-283122"
	I0602 17:13:06.908166  284165 host.go:66] Checking if "addons-20220602171222-283122" exists ...
	I0602 17:13:06.908168  284165 host.go:66] Checking if "addons-20220602171222-283122" exists ...
	I0602 17:13:06.908179  284165 host.go:66] Checking if "addons-20220602171222-283122" exists ...
	I0602 17:13:06.908455  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Status}}
	I0602 17:13:06.908552  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Status}}
	I0602 17:13:06.908598  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Status}}
	I0602 17:13:06.908610  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Status}}
	I0602 17:13:06.908658  284165 addons.go:153] Setting addon ingress=true in "addons-20220602171222-283122"
	I0602 17:13:06.908687  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Status}}
	I0602 17:13:06.908686  284165 host.go:66] Checking if "addons-20220602171222-283122" exists ...
	I0602 17:13:06.908074  284165 addons.go:153] Setting addon storage-provisioner=true in "addons-20220602171222-283122"
	I0602 17:13:06.908734  284165 host.go:66] Checking if "addons-20220602171222-283122" exists ...
	W0602 17:13:06.908742  284165 addons.go:165] addon storage-provisioner should already be in state true
	I0602 17:13:06.908783  284165 host.go:66] Checking if "addons-20220602171222-283122" exists ...
	I0602 17:13:06.908092  284165 mustload.go:65] Loading cluster: addons-20220602171222-283122
	I0602 17:13:06.909478  284165 config.go:178] Loaded profile config "addons-20220602171222-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:13:06.909989  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Status}}
	I0602 17:13:06.910054  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Status}}
	I0602 17:13:06.910067  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Status}}
	I0602 17:13:06.910400  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Status}}
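
Each addon goroutine above starts by confirming the profile container is still up; the repeated `docker container inspect ... --format={{.State.Status}}` calls are that liveness check, kept cheap with a Go template. By hand:

	docker container inspect addons-20220602171222-283122 --format '{{.State.Status}}'
	# prints "running" for a healthy profile; anything else aborts the addon install
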
	I0602 17:13:06.982233  284165 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0602 17:13:06.985971  284165 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0602 17:13:06.986002  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0602 17:13:06.986069  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:13:07.003957  284165 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.6.1
	I0602 17:13:07.002885  284165 addons.go:153] Setting addon default-storageclass=true in "addons-20220602171222-283122"
	I0602 17:13:07.005243  284165 host.go:66] Checking if "addons-20220602171222-283122" exists ...
	W0602 17:13:07.005407  284165 addons.go:165] addon default-storageclass should already be in state true
	I0602 17:13:07.005454  284165 host.go:66] Checking if "addons-20220602171222-283122" exists ...
	I0602 17:13:07.005492  284165 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0602 17:13:07.005511  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0602 17:13:07.005581  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:13:07.006054  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Status}}
	I0602 17:13:07.009952  284165 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0602 17:13:07.011772  284165 addons.go:348] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0602 17:13:07.011877  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0602 17:13:07.011940  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:13:07.012028  284165 out.go:177]   - Using image registry:2.7.1
	I0602 17:13:07.011837  284165 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0602 17:13:07.014044  284165 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0602 17:13:07.017125  284165 addons.go:348] installing /etc/kubernetes/addons/registry-rc.yaml
	I0602 17:13:07.017153  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0602 17:13:07.017218  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:13:07.015637  284165 addons.go:348] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0602 17:13:07.017341  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0602 17:13:07.017372  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:13:07.026463  284165 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 17:13:07.030922  284165 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 17:13:07.030950  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 17:13:07.031019  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:13:07.032994  284165 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v1.2.0
	I0602 17:13:07.028275  284165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
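
The pipeline above rewrites the CoreDNS configmap in place: it fetches the Corefile, uses sed to insert a hosts block immediately before the forward plugin, and pipes the result back through `kubectl replace`. After it completes (confirmed at the "host record injected" line further down), the relevant Corefile stanza should look roughly like this sketch:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf
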
	I0602 17:13:07.030208  284165 node_ready.go:35] waiting up to 6m0s for node "addons-20220602171222-283122" to be "Ready" ...
	I0602 17:13:07.037111  284165 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
	I0602 17:13:07.039467  284165 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
	I0602 17:13:07.041432  284165 addons.go:348] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0602 17:13:07.041459  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15567 bytes)
	I0602 17:13:07.041524  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:13:07.040131  284165 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0602 17:13:07.041673  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:13:07.040236  284165 node_ready.go:49] node "addons-20220602171222-283122" has status "Ready":"True"
	I0602 17:13:07.041764  284165 node_ready.go:38] duration metric: took 6.705212ms waiting for node "addons-20220602171222-283122" to be "Ready" ...
	I0602 17:13:07.041790  284165 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 17:13:07.043825  284165 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0602 17:13:07.045894  284165 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0602 17:13:07.047789  284165 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0602 17:13:07.049674  284165 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0602 17:13:07.051411  284165 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0602 17:13:07.053160  284165 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0602 17:13:07.055176  284165 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0602 17:13:07.057349  284165 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0602 17:13:07.057621  284165 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-9pc6r" in "kube-system" namespace to be "Ready" ...
	I0602 17:13:07.064882  284165 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0602 17:13:07.067474  284165 addons.go:348] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0602 17:13:07.067550  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0602 17:13:07.067631  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:13:07.075677  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
	I0602 17:13:07.089659  284165 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 17:13:07.089692  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 17:13:07.089758  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:13:07.111641  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
	I0602 17:13:07.129999  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
	I0602 17:13:07.133314  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
	I0602 17:13:07.147986  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
	I0602 17:13:07.168336  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
	I0602 17:13:07.173329  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
	I0602 17:13:07.176027  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
	I0602 17:13:07.178452  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
	I0602 17:13:07.178767  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
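
All of the sshutil clients above target 127.0.0.1:49447, the host port Docker published for the container's 22/tcp; the earlier `docker container inspect -f` calls resolve that mapping. Connecting manually follows the same two steps (a sketch; KEY is the id_rsa path shown in the sshutil lines):

	PORT=$(docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  addons-20220602171222-283122)
	ssh -i "$KEY" -p "$PORT" docker@127.0.0.1
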
	I0602 17:13:07.351874  284165 addons.go:348] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0602 17:13:07.351912  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0602 17:13:07.358371  284165 addons.go:348] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0602 17:13:07.358403  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0602 17:13:07.455406  284165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0602 17:13:07.456437  284165 addons.go:348] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0602 17:13:07.456464  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0602 17:13:07.535962  284165 addons.go:348] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0602 17:13:07.535997  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0602 17:13:07.541834  284165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 17:13:07.543631  284165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 17:13:07.547068  284165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0602 17:13:07.553760  284165 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0602 17:13:07.642662  284165 addons.go:348] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0602 17:13:07.642766  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0602 17:13:07.645767  284165 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0602 17:13:07.645792  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0602 17:13:07.737887  284165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0602 17:13:07.739373  284165 addons.go:348] installing /etc/kubernetes/addons/registry-svc.yaml
	I0602 17:13:07.739407  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0602 17:13:07.751580  284165 addons.go:153] Setting addon gcp-auth=true in "addons-20220602171222-283122"
	I0602 17:13:07.751651  284165 host.go:66] Checking if "addons-20220602171222-283122" exists ...
	I0602 17:13:07.752251  284165 cli_runner.go:164] Run: docker container inspect addons-20220602171222-283122 --format={{.State.Status}}
	I0602 17:13:07.756242  284165 addons.go:348] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0602 17:13:07.756280  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0602 17:13:07.756609  284165 addons.go:348] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0602 17:13:07.756635  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0602 17:13:07.786938  284165 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0602 17:13:07.787004  284165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20220602171222-283122
	I0602 17:13:07.819101  284165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49447 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/addons-20220602171222-283122/id_rsa Username:docker}
	I0602 17:13:07.839882  284165 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0602 17:13:07.839917  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0602 17:13:07.848166  284165 addons.go:348] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0602 17:13:07.848198  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0602 17:13:08.037940  284165 addons.go:348] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0602 17:13:08.037976  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0602 17:13:08.042371  284165 addons.go:348] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0602 17:13:08.042403  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0602 17:13:08.049729  284165 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 17:13:08.049762  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0602 17:13:08.136173  284165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0602 17:13:08.139932  284165 addons.go:348] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0602 17:13:08.139967  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0602 17:13:08.142192  284165 addons.go:348] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0602 17:13:08.142220  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0602 17:13:08.151103  284165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0602 17:13:08.239914  284165 addons.go:348] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0602 17:13:08.239951  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0602 17:13:08.255349  284165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0602 17:13:08.334679  284165 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0602 17:13:08.334731  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0602 17:13:08.354778  284165 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0602 17:13:08.354867  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0602 17:13:08.439223  284165 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0602 17:13:08.439261  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0602 17:13:08.456375  284165 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0602 17:13:08.456405  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0602 17:13:08.538053  284165 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0602 17:13:08.538088  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0602 17:13:08.552432  284165 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0602 17:13:08.552464  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0602 17:13:08.566829  284165 addons.go:348] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0602 17:13:08.566856  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0602 17:13:08.579131  284165 pod_ready.go:92] pod "coredns-64897985d-9pc6r" in "kube-system" namespace has status "Ready":"True"
	I0602 17:13:08.579162  284165 pod_ready.go:81] duration metric: took 1.519934229s waiting for pod "coredns-64897985d-9pc6r" in "kube-system" namespace to be "Ready" ...
	I0602 17:13:08.579173  284165 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-m8ts9" in "kube-system" namespace to be "Ready" ...
	I0602 17:13:08.582145  284165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0602 17:13:09.644821  284165 pod_ready.go:92] pod "coredns-64897985d-m8ts9" in "kube-system" namespace has status "Ready":"True"
	I0602 17:13:09.644852  284165 pod_ready.go:81] duration metric: took 1.065671446s waiting for pod "coredns-64897985d-m8ts9" in "kube-system" namespace to be "Ready" ...
	I0602 17:13:09.644871  284165 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20220602171222-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:13:09.652055  284165 pod_ready.go:92] pod "etcd-addons-20220602171222-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:13:09.652091  284165 pod_ready.go:81] duration metric: took 7.210202ms waiting for pod "etcd-addons-20220602171222-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:13:09.652106  284165 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20220602171222-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:13:09.735620  284165 pod_ready.go:92] pod "kube-apiserver-addons-20220602171222-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:13:09.735651  284165 pod_ready.go:81] duration metric: took 83.535537ms waiting for pod "kube-apiserver-addons-20220602171222-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:13:09.735667  284165 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20220602171222-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:13:09.742210  284165 pod_ready.go:92] pod "kube-controller-manager-addons-20220602171222-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:13:09.742243  284165 pod_ready.go:81] duration metric: took 6.56684ms waiting for pod "kube-controller-manager-addons-20220602171222-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:13:09.742257  284165 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d582n" in "kube-system" namespace to be "Ready" ...
	I0602 17:13:09.838994  284165 pod_ready.go:92] pod "kube-proxy-d582n" in "kube-system" namespace has status "Ready":"True"
	I0602 17:13:09.839025  284165 pod_ready.go:81] duration metric: took 96.761133ms waiting for pod "kube-proxy-d582n" in "kube-system" namespace to be "Ready" ...
	I0602 17:13:09.839038  284165 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20220602171222-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:13:09.940388  284165 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.905001601s)
	I0602 17:13:09.940430  284165 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0602 17:13:10.247462  284165 pod_ready.go:92] pod "kube-scheduler-addons-20220602171222-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:13:10.247494  284165 pod_ready.go:81] duration metric: took 408.447099ms waiting for pod "kube-scheduler-addons-20220602171222-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:13:10.247506  284165 pod_ready.go:38] duration metric: took 3.205683127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
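
The pod_ready checks above poll each system-critical pod until its Ready condition reports True. Outside the harness, `kubectl wait` expresses the same gate; a sketch for two of the label selectors listed in the log:

	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
	  wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m
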
	I0602 17:13:10.247536  284165 api_server.go:51] waiting for apiserver process to appear ...
	I0602 17:13:10.247611  284165 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 17:13:10.540818  284165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.997086971s)
	I0602 17:13:10.540873  284165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.085434521s)
	I0602 17:13:10.540903  284165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.998978773s)
	I0602 17:13:11.458664  284165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.911549627s)
	I0602 17:13:11.458777  284165 addons.go:386] Verifying addon ingress=true in "addons-20220602171222-283122"
	I0602 17:13:11.460828  284165 out.go:177] * Verifying ingress addon...
	I0602 17:13:11.459180  284165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (3.721248702s)
	I0602 17:13:11.459235  284165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.323030322s)
	I0602 17:13:11.459304  284165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.308170021s)
	I0602 17:13:11.459347  284165 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.672376717s)
	I0602 17:13:11.462429  284165 addons.go:386] Verifying addon registry=true in "addons-20220602171222-283122"
	I0602 17:13:11.462459  284165 addons.go:386] Verifying addon metrics-server=true in "addons-20220602171222-283122"
	I0602 17:13:11.464206  284165 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
	I0602 17:13:11.463388  284165 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0602 17:13:11.466023  284165 out.go:177] * Verifying registry addon...
	I0602 17:13:11.469307  284165 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.8
	I0602 17:13:11.470839  284165 addons.go:348] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0602 17:13:11.470873  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0602 17:13:11.471597  284165 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0602 17:13:11.538672  284165 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0602 17:13:11.538753  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:11.539260  284165 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0602 17:13:11.539288  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
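
In the kapi lines, "current state: Pending: [<nil>]" means the poller found pods for the selector but none is Running yet; the bracketed value appears to be the pods' (empty) status detail. Watching the same selectors interactively, as a sketch:

	kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx --watch
	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry --watch
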
	I0602 17:13:11.556583  284165 addons.go:348] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0602 17:13:11.556667  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0602 17:13:11.559725  284165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.304322691s)
	W0602 17:13:11.559793  284165 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0602 17:13:11.559817  284165 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
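
The failure above is a create-then-use ordering race rather than a bad manifest: the same apply both creates the snapshot.storage.k8s.io CRDs and instantiates a VolumeSnapshotClass, and the apiserver does not serve a new CRD's API until the CRD reaches its Established condition, so the first pass fails with "no matches for kind". The 276ms retry below succeeds. Making the ordering explicit would look like this sketch:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
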
	I0602 17:13:11.653381  284165 addons.go:348] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0602 17:13:11.653419  284165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4842 bytes)
	I0602 17:13:11.755613  284165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0602 17:13:11.837189  284165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0602 17:13:12.051418  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:12.052555  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:12.548271  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:12.553490  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:12.840385  284165 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.592739834s)
	I0602 17:13:12.840473  284165 api_server.go:71] duration metric: took 5.936592628s to wait for apiserver process to appear ...
	I0602 17:13:12.840494  284165 api_server.go:87] waiting for apiserver healthz status ...
	I0602 17:13:12.840517  284165 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0602 17:13:12.840964  284165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.25807952s)
	I0602 17:13:12.841055  284165 addons.go:386] Verifying addon csi-hostpath-driver=true in "addons-20220602171222-283122"
	I0602 17:13:12.842973  284165 out.go:177] * Verifying csi-hostpath-driver addon...
	I0602 17:13:12.846363  284165 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0602 17:13:12.848871  284165 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0602 17:13:12.849818  284165 api_server.go:140] control plane version: v1.23.6
	I0602 17:13:12.849874  284165 api_server.go:130] duration metric: took 9.36325ms to wait for apiserver health ...
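
The healthz gate is a plain HTTPS GET against the apiserver; a 200 with body "ok" counts as healthy. By hand (a sketch; -k because the endpoint serves minikube's self-signed certificate, and /healthz is readable anonymously under the default RBAC bindings):

	curl -k https://192.168.49.2:8443/healthz
	# ok
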
	I0602 17:13:12.849895  284165 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 17:13:12.852012  284165 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0602 17:13:12.852068  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:12.860378  284165 system_pods.go:59] 20 kube-system pods found
	I0602 17:13:12.860460  284165 system_pods.go:61] "coredns-64897985d-9pc6r" [24673847-58dc-4d8a-9fdf-d58064a61355] Running
	I0602 17:13:12.860479  284165 system_pods.go:61] "coredns-64897985d-m8ts9" [4472222b-f4ca-41cf-bdf5-cf9b501ed9dc] Running
	I0602 17:13:12.860496  284165 system_pods.go:61] "csi-hostpath-attacher-0" [e7f7efef-77af-488b-a0f9-18431749c433] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) didn't match pod affinity rules.)
	I0602 17:13:12.860503  284165 system_pods.go:61] "csi-hostpath-provisioner-0" [8bcae346-bec7-4987-ad5b-4d38dc2f2736] Pending
	I0602 17:13:12.860521  284165 system_pods.go:61] "csi-hostpath-resizer-0" [35a240e2-bec9-47a1-a549-539b4e9674a9] Pending
	I0602 17:13:12.860535  284165 system_pods.go:61] "csi-hostpath-snapshotter-0" [f613499c-604d-4155-b8fe-eea174ce4fd1] Pending
	I0602 17:13:12.860542  284165 system_pods.go:61] "csi-hostpathplugin-0" [02fe28d9-b59f-472c-8acd-1b4df67b0953] Pending
	I0602 17:13:12.860550  284165 system_pods.go:61] "etcd-addons-20220602171222-283122" [d0bdf632-934e-4020-a667-c1011db93687] Running
	I0602 17:13:12.860558  284165 system_pods.go:61] "kube-apiserver-addons-20220602171222-283122" [f167bf12-552a-4cab-b1fb-d35b7d657c05] Running
	I0602 17:13:12.860571  284165 system_pods.go:61] "kube-controller-manager-addons-20220602171222-283122" [5112ae49-c1eb-40d1-bc4e-b73f4344e6c2] Running
	I0602 17:13:12.860581  284165 system_pods.go:61] "kube-ingress-dns-minikube" [bfe439a2-c0f8-4b88-b44e-7b0b8950a157] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0602 17:13:12.860588  284165 system_pods.go:61] "kube-proxy-d582n" [cc15191d-2c33-4338-b120-6e45ab0dc170] Running
	I0602 17:13:12.860600  284165 system_pods.go:61] "kube-scheduler-addons-20220602171222-283122" [3a3a13ec-a682-450b-ad72-93a8178c7124] Running
	I0602 17:13:12.860609  284165 system_pods.go:61] "metrics-server-bd6f4dd56-pffxq" [cc211685-0c70-4102-ab95-dc5b915d0d22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 17:13:12.860623  284165 system_pods.go:61] "registry-proxy-bglxc" [aa97146a-3619-43a6-b652-34d2cff4c61a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0602 17:13:12.860634  284165 system_pods.go:61] "registry-twtpk" [164ddabb-ed64-4c3b-b6d0-febaab2c0a84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0602 17:13:12.860662  284165 system_pods.go:61] "snapshot-controller-7f76975c56-54h9n" [2b1b763b-76cf-477a-b9e7-2d844fd07c35] Pending
	I0602 17:13:12.860675  284165 system_pods.go:61] "snapshot-controller-7f76975c56-7tmt4" [3c02e95d-cf9d-4a5d-b7fc-b50092c93f79] Pending
	I0602 17:13:12.860687  284165 system_pods.go:61] "storage-provisioner" [96a979d3-17d5-455e-a642-c427a368f622] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 17:13:12.860722  284165 system_pods.go:61] "tiller-deploy-6d67d5465d-kj6rb" [5dc0e137-6b88-4169-b52e-81cd98c2f8f7] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0602 17:13:12.860756  284165 system_pods.go:74] duration metric: took 10.846496ms to wait for pod list to return data ...
	I0602 17:13:12.860779  284165 default_sa.go:34] waiting for default service account to be created ...
	I0602 17:13:12.863310  284165 default_sa.go:45] found service account: "default"
	I0602 17:13:12.863360  284165 default_sa.go:55] duration metric: took 2.572871ms for default service account to be created ...
	I0602 17:13:12.863374  284165 system_pods.go:116] waiting for k8s-apps to be running ...
	I0602 17:13:12.949185  284165 system_pods.go:86] 20 kube-system pods found
	I0602 17:13:12.949284  284165 system_pods.go:89] "coredns-64897985d-9pc6r" [24673847-58dc-4d8a-9fdf-d58064a61355] Running
	I0602 17:13:12.949305  284165 system_pods.go:89] "coredns-64897985d-m8ts9" [4472222b-f4ca-41cf-bdf5-cf9b501ed9dc] Running
	I0602 17:13:12.949327  284165 system_pods.go:89] "csi-hostpath-attacher-0" [e7f7efef-77af-488b-a0f9-18431749c433] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) didn't match pod affinity rules.)
	I0602 17:13:12.949344  284165 system_pods.go:89] "csi-hostpath-provisioner-0" [8bcae346-bec7-4987-ad5b-4d38dc2f2736] Pending
	I0602 17:13:12.949359  284165 system_pods.go:89] "csi-hostpath-resizer-0" [35a240e2-bec9-47a1-a549-539b4e9674a9] Pending
	I0602 17:13:12.949375  284165 system_pods.go:89] "csi-hostpath-snapshotter-0" [f613499c-604d-4155-b8fe-eea174ce4fd1] Pending
	I0602 17:13:12.949390  284165 system_pods.go:89] "csi-hostpathplugin-0" [02fe28d9-b59f-472c-8acd-1b4df67b0953] Pending
	I0602 17:13:12.949406  284165 system_pods.go:89] "etcd-addons-20220602171222-283122" [d0bdf632-934e-4020-a667-c1011db93687] Running
	I0602 17:13:12.949421  284165 system_pods.go:89] "kube-apiserver-addons-20220602171222-283122" [f167bf12-552a-4cab-b1fb-d35b7d657c05] Running
	I0602 17:13:12.949437  284165 system_pods.go:89] "kube-controller-manager-addons-20220602171222-283122" [5112ae49-c1eb-40d1-bc4e-b73f4344e6c2] Running
	I0602 17:13:12.949455  284165 system_pods.go:89] "kube-ingress-dns-minikube" [bfe439a2-c0f8-4b88-b44e-7b0b8950a157] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0602 17:13:12.949473  284165 system_pods.go:89] "kube-proxy-d582n" [cc15191d-2c33-4338-b120-6e45ab0dc170] Running
	I0602 17:13:12.949503  284165 system_pods.go:89] "kube-scheduler-addons-20220602171222-283122" [3a3a13ec-a682-450b-ad72-93a8178c7124] Running
	I0602 17:13:12.949522  284165 system_pods.go:89] "metrics-server-bd6f4dd56-pffxq" [cc211685-0c70-4102-ab95-dc5b915d0d22] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0602 17:13:12.949541  284165 system_pods.go:89] "registry-proxy-bglxc" [aa97146a-3619-43a6-b652-34d2cff4c61a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0602 17:13:12.949560  284165 system_pods.go:89] "registry-twtpk" [164ddabb-ed64-4c3b-b6d0-febaab2c0a84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0602 17:13:12.949595  284165 system_pods.go:89] "snapshot-controller-7f76975c56-54h9n" [2b1b763b-76cf-477a-b9e7-2d844fd07c35] Pending
	I0602 17:13:12.949615  284165 system_pods.go:89] "snapshot-controller-7f76975c56-7tmt4" [3c02e95d-cf9d-4a5d-b7fc-b50092c93f79] Pending
	I0602 17:13:12.949632  284165 system_pods.go:89] "storage-provisioner" [96a979d3-17d5-455e-a642-c427a368f622] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 17:13:12.949649  284165 system_pods.go:89] "tiller-deploy-6d67d5465d-kj6rb" [5dc0e137-6b88-4169-b52e-81cd98c2f8f7] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0602 17:13:12.949676  284165 system_pods.go:126] duration metric: took 86.293319ms to wait for k8s-apps to be running ...
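
Note that the k8s-apps check passes even though several pods in the dump are still Pending; it only requires the named core components to be Running. The one pod already reporting a scheduling failure, csi-hostpath-attacher-0, is Unschedulable because the single node does not yet satisfy its pod-affinity rule (presumably affinity to the hostpath plugin pod, which is itself still Pending). To dig into a pod stuck like that:

	kubectl -n kube-system describe pod csi-hostpath-attacher-0
	# the Events section carries the scheduler's "didn't match pod affinity rules" message
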
	I0602 17:13:12.949702  284165 system_svc.go:44] waiting for kubelet service to be running ....
	I0602 17:13:12.949776  284165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 17:13:13.045388  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:13.046954  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:13.361321  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:13.543660  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:13.544932  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:13.859358  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:14.051092  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:14.052607  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:14.360910  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:14.543792  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:14.544712  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:14.860834  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:15.045350  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:15.046362  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:15.359241  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:15.545117  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:15.546165  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:15.936072  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:16.046874  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:16.048390  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:16.359152  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:16.448102  284165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (4.692436317s)
	I0602 17:13:16.450699  284165 addons.go:386] Verifying addon gcp-auth=true in "addons-20220602171222-283122"
	I0602 17:13:16.453251  284165 out.go:177] * Verifying gcp-auth addon...
	I0602 17:13:16.456302  284165 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0602 17:13:16.460526  284165 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0602 17:13:16.460561  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:16.547107  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:16.548008  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:16.859962  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:16.964817  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:17.044990  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:17.045200  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:17.052878  284165 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.215580403s)
	I0602 17:13:17.052925  284165 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.103000058s)
	I0602 17:13:17.052952  284165 system_svc.go:56] duration metric: took 4.103246853s WaitForService to wait for kubelet.
	I0602 17:13:17.052968  284165 kubeadm.go:572] duration metric: took 10.149085203s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0602 17:13:17.053003  284165 node_conditions.go:102] verifying NodePressure condition ...
	I0602 17:13:17.056421  284165 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0602 17:13:17.056457  284165 node_conditions.go:123] node cpu capacity is 8
	I0602 17:13:17.056473  284165 node_conditions.go:105] duration metric: took 3.436499ms to run NodePressure ...
	I0602 17:13:17.056488  284165 start.go:213] waiting for startup goroutines ...
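	The component map in the kubeadm.go line above (apiserver, apps_running, default_sa, extra, kubelet, node_ready, system_pods) is the set selected by minikube's --wait flag; a hedged command-line equivalent that waits on every component, per minikube start --help, is:
	
	    minikube start --wait=all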
	I0602 17:13:17.359256  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:17.464622  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:17.544316  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:17.544600  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:17.859896  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:17.965354  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:18.044185  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:18.044478  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:18.358925  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:18.464756  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:18.545133  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:18.545865  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:18.859462  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:18.964380  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:19.043528  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:19.043743  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:19.358763  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:19.464588  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:19.545059  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:19.545321  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:19.857660  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:19.965143  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:20.043591  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:20.044419  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:20.359711  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:20.467969  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:20.544721  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:20.545545  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:20.858786  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:20.964930  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:21.048421  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:21.048683  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:21.359261  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:21.465286  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:21.543384  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:21.544312  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:21.859496  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:21.964354  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:22.043156  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:22.043341  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:22.357959  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:22.464662  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:22.543792  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:22.543794  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:22.859541  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:22.964728  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:23.044911  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:23.045663  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:23.359544  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:23.464825  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:23.543822  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:23.544067  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:23.858857  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:23.965423  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:24.043811  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:24.043838  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:24.462553  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:24.464008  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:24.544282  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:24.544489  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:24.859263  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:24.964356  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:25.043946  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:25.044261  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:25.359426  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:25.464816  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:25.544501  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:25.545173  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:25.859202  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:25.964608  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:26.044633  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:26.045331  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:26.358213  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:26.464747  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:26.544280  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:26.544385  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:26.859006  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:26.966068  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:27.044938  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:27.045949  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:27.358885  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:27.465248  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:27.543391  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:27.544528  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:27.859452  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:27.964965  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:28.045469  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:28.046104  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:28.358221  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:28.464564  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:28.543815  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:28.544007  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:28.862869  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:28.965363  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:29.044292  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:29.044375  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:29.358291  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:29.465155  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:29.544522  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:29.544621  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:29.858731  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:29.964319  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:30.043743  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:30.044083  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:30.359227  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:30.464345  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:30.543451  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:30.543603  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:30.892195  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:30.964551  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:31.043726  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:31.043741  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:31.359847  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:31.465298  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:31.543645  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:31.543732  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:31.858517  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:31.964779  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:32.044920  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:32.045088  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0602 17:13:32.358962  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:32.464337  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:32.544897  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:32.545039  284165 kapi.go:108] duration metric: took 21.073415342s to wait for kubernetes.io/minikube-addons=registry ...
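	With the registry wait settled, the pods it was polling can be listed by hand using the same selector and namespace that appear in the log:
	
	    kubectl --context addons-20220602171222-283122 get pods -n kube-system \
	      -l kubernetes.io/minikube-addons=registry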
	I0602 17:13:32.859338  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:32.964547  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:33.044042  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:33.359622  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:33.465208  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:33.544455  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:33.858228  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:33.964794  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:34.043802  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:34.358593  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:34.464561  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:34.573113  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:34.859129  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:34.965218  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:35.044583  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:35.359833  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:35.465112  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:35.544313  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:35.858098  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:35.964749  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:36.045131  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:36.359887  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:36.465247  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:36.544715  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:36.859086  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:36.965406  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:37.043581  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:37.358224  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:37.465220  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:37.544975  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:37.859257  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:37.964545  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:38.043854  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:38.358767  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:38.464613  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:38.543325  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:38.858938  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:38.964509  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:39.044047  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:39.385416  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:39.465163  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:39.544270  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:39.858290  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:39.964705  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:40.044063  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:40.357835  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:40.464227  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:40.543493  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:40.858727  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:40.965586  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:41.043985  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:41.358262  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:41.465295  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:41.604299  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:41.858441  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:41.965424  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:42.043528  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:42.359326  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:42.465493  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:42.543930  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:42.858623  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:42.965172  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:43.044709  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:43.359631  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:43.464887  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:43.544400  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:43.859427  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:43.964781  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:44.046298  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:44.357844  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:44.464268  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:44.544101  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:44.858961  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:44.965342  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:45.044239  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:45.358798  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:45.465382  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:45.544084  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:45.860269  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:45.965194  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:46.048898  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:46.366927  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:46.470964  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:46.543929  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:46.859171  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:46.965542  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:47.043669  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:47.362711  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:47.465274  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:47.543742  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:47.859095  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:47.964531  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:48.043871  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:48.359038  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:48.464967  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:48.543855  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:48.858869  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:49.023515  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:49.093651  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:49.360724  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:49.465281  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:49.543762  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:49.857824  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:49.965511  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:50.043893  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:50.358116  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:50.464261  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:50.542819  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:50.858938  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:50.964777  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:51.043814  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:51.359363  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:51.640865  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:51.641325  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:51.859336  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:51.964247  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:52.044302  284165 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0602 17:13:52.358748  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:52.465287  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:52.543785  284165 kapi.go:108] duration metric: took 41.08039891s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0602 17:13:52.861972  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:52.965259  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:53.358859  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:53.464667  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:53.939379  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:54.036347  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:54.359405  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:54.464141  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:54.860114  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0602 17:13:54.965293  284165 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0602 17:13:55.359187  284165 kapi.go:108] duration metric: took 42.512823918s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0602 17:13:55.464146  284165 kapi.go:108] duration metric: took 39.007842201s to wait for kubernetes.io/minikube-addons=gcp-auth ...
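	The remaining label waits (ingress-nginx, csi-hostpath-driver, gcp-auth) have now all resolved; each selector can be spot-checked by hand. Only the gcp-auth namespace is named in the log, so the other two are queried across all namespaces as a hedge:
	
	    kubectl --context addons-20220602171222-283122 get pods -A -l app.kubernetes.io/name=ingress-nginx
	    kubectl --context addons-20220602171222-283122 get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver
	    kubectl --context addons-20220602171222-283122 get pods -n gcp-auth -l kubernetes.io/minikube-addons=gcp-auth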
	I0602 17:13:55.466506  284165 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-20220602171222-283122 cluster.
	I0602 17:13:55.468181  284165 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0602 17:13:55.469899  284165 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
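	A minimal sketch of the opt-out described above: a throwaway pod carrying the gcp-auth-skip-secret label, followed by the remount hint as a concrete command. The pod name, image, and the "true" label value are illustrative assumptions, not taken from this run:
	
	    kubectl apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: skip-gcp-auth-demo
	      labels:
	        gcp-auth-skip-secret: "true"   # label key from the message above; value assumed
	    spec:
	      containers:
	      - name: app
	        image: busybox
	        command: ["sleep", "3600"]
	    EOF
	
	    minikube -p addons-20220602171222-283122 addons enable gcp-auth --refresh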
	I0602 17:13:55.471675  284165 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, default-storageclass, helm-tiller, metrics-server, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0602 17:13:55.473373  284165 addons.go:417] enableAddons completed in 48.569496051s
	I0602 17:13:55.514946  284165 start.go:504] kubectl: 1.24.1, cluster: 1.23.6 (minor skew: 1)
	I0602 17:13:55.517324  284165 out.go:177] * Done! kubectl is now configured to use "addons-20220602171222-283122" cluster and "default" namespace by default
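	Once the profile reports Done, the configured context can be verified from the host; the 1.24.1/1.23.6 skew noted above is within kubectl's supported +/-1 minor-version window:
	
	    kubectl config current-context   # should print addons-20220602171222-283122
	    kubectl get pods -A              # the addon pods listed above should be Running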
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 17:12:38 UTC, end at Thu 2022-06-02 17:17:27 UTC. --
	Jun 02 17:14:18 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:18.927617924Z" level=info msg="ignoring event" container=77222a4d9762c3b6bdbc43e4895d7fdac9d1784d19dd9d2ffef217ca4edb1166 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:31 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:31.952142349Z" level=info msg="ignoring event" container=9ed6dae555fd04d8e3ab0ef9bb9e7c79a4a7fb25b0dc0aa7d18ce0c37636a6ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:32 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:32.007139102Z" level=info msg="ignoring event" container=b9728d329e608d7250017c455ea6fdf54ec32a8ccccd3af169c2e82e1c322fd4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:33 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:33.351913059Z" level=info msg="ignoring event" container=b6f78bddaaefce5b33559ad77610452e7a2a608abce4d1a95f7eb3870c9c0c84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:33 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:33.642339124Z" level=info msg="ignoring event" container=f9fe6694c27cb0d7fdb4595e820edfda37ad8f952e26484103c181c29d44b13a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:33 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:33.734825179Z" level=info msg="ignoring event" container=b9a5ff9bfe806d8fe2553fe1025a23da8a5709851bd287ccde54710a404c7d4c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:33 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:33.736076411Z" level=info msg="ignoring event" container=b89bae8128cecbc3451e09b3293bf96f5c9cbd1f23238a01ec1a33f15fafa23b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:33 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:33.737252298Z" level=info msg="ignoring event" container=3e3a9767e08dcd14e5091ef267ceeba3faef1b9b885701c99abb46fa35aadebc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:33 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:33.738003578Z" level=info msg="ignoring event" container=a308a46d2aff80d71c7bb55828f35f327af7f1c9b69b3a9393002ac7fb1d8c56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:33 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:33.741467713Z" level=info msg="ignoring event" container=998d65a4c243ad96e41cf5d5b976ab8977d530c25ba46576de32c2e73e35d667 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:33 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:33.741508376Z" level=info msg="ignoring event" container=f9db4d7448b38088a8dc54f6fc814d13568e63f9deef001b2d0375e8cecf66b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:33 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:33.855088954Z" level=info msg="ignoring event" container=89758a57ea0d550943ca69a478ba54634adc2ac0b29eb44a0c9d9048f70c8288 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:33 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:33.859577891Z" level=info msg="ignoring event" container=2c29a846c69b86fa1f8acb57d240d8c68bcc4384860806766bebb7e583208f95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:34 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:34.034638455Z" level=info msg="ignoring event" container=b7b9b263c5125097ea64b862d9b322a1c914f6acab98a79d0335a891bfb2e33c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:34 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:34.035044127Z" level=info msg="ignoring event" container=e7c29bb3d4fc92282f198fc7ca7a082784f97744d40cd4457e2c888f54ad36a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:34 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:34.035984093Z" level=info msg="ignoring event" container=ace29ad6e7d3c62f34602608c001a9e76a21d70b8e08a52163d8bb39196b75ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:34 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:34.059231904Z" level=info msg="ignoring event" container=b8e6d7706b13f6a426fd33ca4b7fb19a76d9fffab892ed5dc033eb4793d59b61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:40 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:40.352373025Z" level=info msg="ignoring event" container=c9aceb48df7dc2fcce95380a84a82f4859e6c8644873981506cb4ce1adf2fe58 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:40 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:40.353808397Z" level=info msg="ignoring event" container=32cae51fc3a4763299acf6b744628d475cc44ef971619b62f08cee86c1fd056d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:40 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:40.464828372Z" level=info msg="ignoring event" container=5b1920770ad447883c25463f5c921912ab2b01fd20205defe28a23efdde4ed30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:40 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:40.465614909Z" level=info msg="ignoring event" container=3d45f9f4bf1812c358d383ce52530fbaf981b4ff8c9d2f46ac221d3d245ef415 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:41 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:41.049039702Z" level=info msg="ignoring event" container=42d6c6a0ebce05e365fa493a091a6007e96a63c6071648c8b3db63826f4ed754 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:14:42 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:14:42.997724979Z" level=info msg="ignoring event" container=09f3082fe78d59f19d3c07bd3cbeffcd069730b14ecca30f163c80a2218a8f4b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:17:26 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:17:26.162800221Z" level=info msg="ignoring event" container=c705ad68ab9cb657931fdda31e90b636eea6547aac1744da547490a211711bd9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:17:26 addons-20220602171222-283122 dockerd[491]: time="2022-06-02T17:17:26.278210601Z" level=info msg="ignoring event" container=d24fb865890caaa2aa0056f6cf2604d04fdf0def25376a1af9b2f5dfca9dd243 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
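	These daemon lines come from the node's journal; a hedged way to pull the same window by hand, assuming the unit is named docker and taking the bounds from the "Logs begin at" header above:
	
	    minikube ssh -p addons-20220602171222-283122 -- \
	      sudo journalctl -u docker --since '2022-06-02 17:12:38' --until '2022-06-02 17:17:27'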
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                  CREATED             STATE               NAME                      ATTEMPT             POD ID
	f1c256549f70f       gcr.io/google-samples/hello-app@sha256:88b205d7995332e10e836514fbfd59ecaf8976fc15060cd66e85cdcebe7fb356                3 minutes ago       Running             hello-world-app           0                   5147b5d08adaa
	9f90b3eb62502       nginx@sha256:a74534e76ee1121d418fa7394ca930eb67440deda413848bc67c68138535b989                                          3 minutes ago       Running             nginx                     0                   1c15e80843b31
	828b246c65f8c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:26c7b2454f1c946d7c80839251d939606620f37c2f275be2796c1ffd96c438f6           3 minutes ago       Running             gcp-auth                  0                   d841dc3480e55
	dfb331db53b41       gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da   3 minutes ago       Running             registry-proxy            0                   064ce49e43bb8
	7ceac803abc2c       6e38f40d628db                                                                                                          4 minutes ago       Running             storage-provisioner       0                   2ea5ad26d8491
	186d4c784a609       a4ca41631cc7a                                                                                                          4 minutes ago       Running             coredns                   0                   3db13a168cc63
	c55be71fcb230       4c03754524064                                                                                                          4 minutes ago       Running             kube-proxy                0                   179b2e6a351cd
	26b0a45ff6660       595f327f224a4                                                                                                          4 minutes ago       Running             kube-scheduler            0                   d3edcf0346afc
	a7c7f919dfc8d       25f8c7f3da61c                                                                                                          4 minutes ago       Running             etcd                      0                   873e960e1df7d
	134019186c6b3       df7b72818ad2e                                                                                                          4 minutes ago       Running             kube-controller-manager   0                   8d6ae7a1c0319
	5215ecfded9be       8fa62c12256df                                                                                                          4 minutes ago       Running             kube-apiserver            0                   d83ab976d03d7
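	The table above is the node-side container listing; a roughly equivalent live view (a sketch, assuming direct access to the Docker runtime reported in the node info below) is:
	
	    minikube ssh -p addons-20220602171222-283122 -- sudo docker ps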
	
	* 
	* ==> coredns [186d4c784a60] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20220602171222-283122
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-20220602171222-283122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=addons-20220602171222-283122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T17_12_53_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20220602171222-283122
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 17:12:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20220602171222-283122
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 17:17:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 17:14:35 +0000   Thu, 02 Jun 2022 17:12:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 17:14:35 +0000   Thu, 02 Jun 2022 17:12:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 17:14:35 +0000   Thu, 02 Jun 2022 17:12:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 17:14:35 +0000   Thu, 02 Jun 2022 17:13:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20220602171222-283122
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                79981cdf-e315-492c-9c8e-ea5502893ddf
	  Boot ID:                    eac629ea-39e3-4b75-b891-94bd750a4fe6
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86d5b6469f-ghpxf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  default                     nginx                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  gcp-auth                    gcp-auth-59b76855d9-s26dt                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 coredns-64897985d-m8ts9                                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m21s
	  kube-system                 etcd-addons-20220602171222-283122                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m33s
	  kube-system                 kube-apiserver-addons-20220602171222-283122             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-controller-manager-addons-20220602171222-283122    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-proxy-d582n                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-scheduler-addons-20220602171222-283122             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 registry-proxy-bglxc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m21s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  4m42s (x5 over 4m42s)  kubelet     Node addons-20220602171222-283122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s (x4 over 4m42s)  kubelet     Node addons-20220602171222-283122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s (x4 over 4m42s)  kubelet     Node addons-20220602171222-283122 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m42s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m42s                  kubelet     Starting kubelet.
	  Normal  Starting                 4m34s                  kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    4m34s                  kubelet     Node addons-20220602171222-283122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s                  kubelet     Node addons-20220602171222-283122 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4m34s                  kubelet     Node addons-20220602171222-283122 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  4m34s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m34s                  kubelet     Node addons-20220602171222-283122 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                4m24s                  kubelet     Node addons-20220602171222-283122 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000032] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +2.947754] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.019849] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.023843] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000036] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[Jun 2 17:03] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.025582] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.023870] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +2.947784] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.019868] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000041] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.023839] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000032] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +2.959752] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.007840] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000034] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.023865] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	
	* 
	* ==> etcd [a7c7f919dfc8] <==
	* {"level":"info","ts":"2022-06-02T17:12:47.451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-02T17:12:47.451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-02T17:12:47.451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:12:47.451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-02T17:12:47.451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:12:47.451Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:12:47.452Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:12:47.452Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:12:47.453Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:12:47.453Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-20220602171222-283122 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T17:12:47.453Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:12:47.454Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:12:47.454Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T17:12:47.454Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T17:12:47.455Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T17:12:47.456Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2022-06-02T17:13:24.457Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"103.677033ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:85244"}
	{"level":"info","ts":"2022-06-02T17:13:24.458Z","caller":"traceutil/trace.go:171","msg":"trace[1693965874] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:885; }","duration":"103.789701ms","start":"2022-06-02T17:13:24.354Z","end":"2022-06-02T17:13:24.458Z","steps":["trace[1693965874] 'agreement among raft nodes before linearized reading'  (duration: 34.568199ms)","trace[1693965874] 'range keys from in-memory index tree'  (duration: 68.977298ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T17:13:46.362Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"126.381858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-02T17:13:46.362Z","caller":"traceutil/trace.go:171","msg":"trace[1807581828] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1038; }","duration":"126.507903ms","start":"2022-06-02T17:13:46.236Z","end":"2022-06-02T17:13:46.362Z","steps":["trace[1807581828] 'range keys from in-memory index tree'  (duration: 126.26797ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T17:13:46.362Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"107.818411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2022-06-02T17:13:46.362Z","caller":"traceutil/trace.go:171","msg":"trace[894271934] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1038; }","duration":"107.917754ms","start":"2022-06-02T17:13:46.254Z","end":"2022-06-02T17:13:46.362Z","steps":["trace[894271934] 'range keys from in-memory index tree'  (duration: 107.666431ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-02T17:13:51.638Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"176.381073ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:9857"}
	{"level":"info","ts":"2022-06-02T17:13:51.638Z","caller":"traceutil/trace.go:171","msg":"trace[526464125] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1050; }","duration":"176.478968ms","start":"2022-06-02T17:13:51.462Z","end":"2022-06-02T17:13:51.638Z","steps":["trace[526464125] 'range keys from in-memory index tree'  (duration: 176.257457ms)"],"step_count":1}
	{"level":"info","ts":"2022-06-02T17:13:51.795Z","caller":"traceutil/trace.go:171","msg":"trace[491043559] transaction","detail":"{read_only:false; response_revision:1051; number_of_response:1; }","duration":"140.731258ms","start":"2022-06-02T17:13:51.654Z","end":"2022-06-02T17:13:51.795Z","steps":["trace[491043559] 'process raft request'  (duration: 90.45629ms)","trace[491043559] 'compare'  (duration: 50.145927ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  17:17:27 up  2:00,  0 users,  load average: 0.07, 0.40, 0.79
	Linux addons-20220602171222-283122 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [5215ecfded9b] <==
	* E0602 17:13:11.160704       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0602 17:13:11.160716       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0602 17:13:11.843228       1 alloc.go:329] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs=map[IPv4:10.96.163.166]
	I0602 17:13:11.859154       1 controller.go:611] quota admission added evaluator for: statefulsets.apps
	I0602 17:13:12.046602       1 alloc.go:329] "allocated clusterIPs" service="kube-system/csi-hostpathplugin" clusterIPs=map[IPv4:10.110.167.32]
	I0602 17:13:12.242922       1 alloc.go:329] "allocated clusterIPs" service="kube-system/csi-hostpath-provisioner" clusterIPs=map[IPv4:10.108.135.89]
	I0602 17:13:12.450690       1 alloc.go:329] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs=map[IPv4:10.105.31.35]
	I0602 17:13:12.638961       1 alloc.go:329] "allocated clusterIPs" service="kube-system/csi-hostpath-snapshotter" clusterIPs=map[IPv4:10.97.197.249]
	I0602 17:13:15.953610       1 alloc.go:329] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs=map[IPv4:10.104.145.201]
	E0602 17:13:27.043492       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.32.179:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.32.179:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.32.179:443: connect: connection refused
	E0602 17:13:27.046992       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.32.179:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.32.179:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.32.179:443: connect: connection refused
	E0602 17:13:27.051066       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.32.179:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.32.179:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.32.179:443: connect: connection refused
	E0602 17:13:27.071819       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.32.179:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.32.179:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.32.179:443: connect: connection refused
	E0602 17:13:27.134575       1 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.32.179:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.32.179:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.32.179:443: connect: connection refused
	I0602 17:14:02.382085       1 controller.go:611] quota admission added evaluator for: ingresses.networking.k8s.io
	I0602 17:14:02.635805       1 alloc.go:329] "allocated clusterIPs" service="default/nginx" clusterIPs=map[IPv4:10.96.122.130]
	E0602 17:14:03.596267       1 upgradeaware.go:409] Error proxying data from client to backend: read tcp 192.168.49.2:8443->172.17.0.8:34138: read: connection reset by peer
	I0602 17:14:13.267609       1 alloc.go:329] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.104.16.66]
	I0602 17:14:17.597704       1 controller.go:611] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0602 17:14:28.157146       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0602 17:14:33.437903       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-external-health-monitor-controller\" not found]"
	W0602 17:14:41.137795       1 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
	W0602 17:14:41.155942       1 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
	W0602 17:14:41.166860       1 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
	
	* 
	* ==> kube-controller-manager [134019186c6b] <==
	* E0602 17:14:58.006038       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0602 17:15:05.505515       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0602 17:15:05.505561       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:15:05.970300       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0602 17:15:05.970360       1 shared_informer.go:247] Caches are synced for garbage collector 
	W0602 17:15:09.707743       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0602 17:15:09.707776       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0602 17:15:11.705712       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0602 17:15:11.705752       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0602 17:15:16.647155       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0602 17:15:16.647190       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0602 17:15:39.054590       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0602 17:15:39.054629       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0602 17:15:39.554058       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0602 17:15:39.554098       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0602 17:16:07.419058       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0602 17:16:07.419103       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0602 17:16:15.933357       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0602 17:16:15.933396       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0602 17:16:37.509914       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0602 17:16:37.509951       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0602 17:16:54.598960       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0602 17:16:54.598996       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0602 17:17:15.100122       1 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0602 17:17:15.100157       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [c55be71fcb23] <==
	* I0602 17:13:06.371700       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0602 17:13:06.371789       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0602 17:13:06.371824       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 17:13:06.396329       1 server_others.go:206] "Using iptables Proxier"
	I0602 17:13:06.396382       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 17:13:06.396394       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 17:13:06.396427       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 17:13:06.396838       1 server.go:656] "Version info" version="v1.23.6"
	I0602 17:13:06.397559       1 config.go:317] "Starting service config controller"
	I0602 17:13:06.397584       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 17:13:06.397569       1 config.go:226] "Starting endpoint slice config controller"
	I0602 17:13:06.397643       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 17:13:06.498763       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0602 17:13:06.498785       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [26b0a45ff666] <==
	* E0602 17:12:50.160358       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 17:12:50.160435       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0602 17:12:50.160444       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0602 17:12:50.160539       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 17:12:50.160581       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 17:12:50.160673       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0602 17:12:50.160690       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0602 17:12:50.160744       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 17:12:50.160759       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0602 17:12:50.160816       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0602 17:12:50.160843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0602 17:12:50.161006       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 17:12:50.161054       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 17:12:50.235136       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0602 17:12:50.235255       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0602 17:12:51.081584       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0602 17:12:51.081657       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 17:12:51.112709       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 17:12:51.112750       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 17:12:51.246761       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0602 17:12:51.246805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0602 17:12:51.437417       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 17:12:51.437451       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 17:12:51.506369       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0602 17:12:54.457224       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 17:12:38 UTC, end at Thu 2022-06-02 17:17:27 UTC. --
	Jun 02 17:14:41 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:41.371027    1916 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2b1b763b-76cf-477a-b9e7-2d844fd07c35 path="/var/lib/kubelet/pods/2b1b763b-76cf-477a-b9e7-2d844fd07c35/volumes"
	Jun 02 17:14:41 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:41.371339    1916 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=3c02e95d-cf9d-4a5d-b7fc-b50092c93f79 path="/var/lib/kubelet/pods/3c02e95d-cf9d-4a5d-b7fc-b50092c93f79/volumes"
	Jun 02 17:14:41 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:41.951896    1916 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/registry-test through plugin: invalid network status for"
	Jun 02 17:14:43 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:43.299608    1916 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j25f5\" (UniqueName: \"kubernetes.io/projected/35840b1b-9ca3-4b8f-ae29-29642369f025-kube-api-access-j25f5\") pod \"35840b1b-9ca3-4b8f-ae29-29642369f025\" (UID: \"35840b1b-9ca3-4b8f-ae29-29642369f025\") "
	Jun 02 17:14:43 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:43.299697    1916 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/35840b1b-9ca3-4b8f-ae29-29642369f025-gcp-creds\") pod \"35840b1b-9ca3-4b8f-ae29-29642369f025\" (UID: \"35840b1b-9ca3-4b8f-ae29-29642369f025\") "
	Jun 02 17:14:43 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:43.299794    1916 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35840b1b-9ca3-4b8f-ae29-29642369f025-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "35840b1b-9ca3-4b8f-ae29-29642369f025" (UID: "35840b1b-9ca3-4b8f-ae29-29642369f025"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jun 02 17:14:43 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:43.302088    1916 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35840b1b-9ca3-4b8f-ae29-29642369f025-kube-api-access-j25f5" (OuterVolumeSpecName: "kube-api-access-j25f5") pod "35840b1b-9ca3-4b8f-ae29-29642369f025" (UID: "35840b1b-9ca3-4b8f-ae29-29642369f025"). InnerVolumeSpecName "kube-api-access-j25f5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 02 17:14:43 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:43.400070    1916 reconciler.go:300] "Volume detached for volume \"kube-api-access-j25f5\" (UniqueName: \"kubernetes.io/projected/35840b1b-9ca3-4b8f-ae29-29642369f025-kube-api-access-j25f5\") on node \"addons-20220602171222-283122\" DevicePath \"\""
	Jun 02 17:14:43 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:43.400132    1916 reconciler.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/35840b1b-9ca3-4b8f-ae29-29642369f025-gcp-creds\") on node \"addons-20220602171222-283122\" DevicePath \"\""
	Jun 02 17:14:43 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:43.983808    1916 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="09f3082fe78d59f19d3c07bd3cbeffcd069730b14ecca30f163c80a2218a8f4b"
	Jun 02 17:14:45 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:45.365944    1916 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=35840b1b-9ca3-4b8f-ae29-29642369f025 path="/var/lib/kubelet/pods/35840b1b-9ca3-4b8f-ae29-29642369f025/volumes"
	Jun 02 17:14:53 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:53.278448    1916 scope.go:110] "RemoveContainer" containerID="42d6c6a0ebce05e365fa493a091a6007e96a63c6071648c8b3db63826f4ed754"
	Jun 02 17:14:53 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:53.290791    1916 scope.go:110] "RemoveContainer" containerID="6ef39524f80cf16a7c6913a4643b4f7ebec0cac1625a0fe1682346cde51a3a39"
	Jun 02 17:14:53 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:53.302426    1916 scope.go:110] "RemoveContainer" containerID="0e4694b2a776f2cb00087add1454c2898c813a12c96d78b4a70b064b6f8e7176"
	Jun 02 17:14:53 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:53.315448    1916 scope.go:110] "RemoveContainer" containerID="bb5152c315c06605698147b79e92927a0434d99d9d624a811e54dff29cb432c5"
	Jun 02 17:14:53 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:53.327432    1916 scope.go:110] "RemoveContainer" containerID="fac063d92525eb6db0de3408fb2c7687892b278830a97d2eb2d0aac19dde81dd"
	Jun 02 17:14:53 addons-20220602171222-283122 kubelet[1916]: I0602 17:14:53.339380    1916 scope.go:110] "RemoveContainer" containerID="5d2ccf596bb155af54c02e186a911216514797f0d189213a9b39e51e98b0ffa7"
	Jun 02 17:17:26 addons-20220602171222-283122 kubelet[1916]: I0602 17:17:26.471330    1916 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcc9c\" (UniqueName: \"kubernetes.io/projected/164ddabb-ed64-4c3b-b6d0-febaab2c0a84-kube-api-access-zcc9c\") pod \"164ddabb-ed64-4c3b-b6d0-febaab2c0a84\" (UID: \"164ddabb-ed64-4c3b-b6d0-febaab2c0a84\") "
	Jun 02 17:17:26 addons-20220602171222-283122 kubelet[1916]: I0602 17:17:26.473486    1916 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/164ddabb-ed64-4c3b-b6d0-febaab2c0a84-kube-api-access-zcc9c" (OuterVolumeSpecName: "kube-api-access-zcc9c") pod "164ddabb-ed64-4c3b-b6d0-febaab2c0a84" (UID: "164ddabb-ed64-4c3b-b6d0-febaab2c0a84"). InnerVolumeSpecName "kube-api-access-zcc9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 02 17:17:26 addons-20220602171222-283122 kubelet[1916]: I0602 17:17:26.571947    1916 reconciler.go:300] "Volume detached for volume \"kube-api-access-zcc9c\" (UniqueName: \"kubernetes.io/projected/164ddabb-ed64-4c3b-b6d0-febaab2c0a84-kube-api-access-zcc9c\") on node \"addons-20220602171222-283122\" DevicePath \"\""
	Jun 02 17:17:27 addons-20220602171222-283122 kubelet[1916]: I0602 17:17:27.247220    1916 scope.go:110] "RemoveContainer" containerID="c705ad68ab9cb657931fdda31e90b636eea6547aac1744da547490a211711bd9"
	Jun 02 17:17:27 addons-20220602171222-283122 kubelet[1916]: I0602 17:17:27.262746    1916 scope.go:110] "RemoveContainer" containerID="c705ad68ab9cb657931fdda31e90b636eea6547aac1744da547490a211711bd9"
	Jun 02 17:17:27 addons-20220602171222-283122 kubelet[1916]: E0602 17:17:27.263634    1916 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: c705ad68ab9cb657931fdda31e90b636eea6547aac1744da547490a211711bd9" containerID="c705ad68ab9cb657931fdda31e90b636eea6547aac1744da547490a211711bd9"
	Jun 02 17:17:27 addons-20220602171222-283122 kubelet[1916]: I0602 17:17:27.263709    1916 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:c705ad68ab9cb657931fdda31e90b636eea6547aac1744da547490a211711bd9} err="failed to get container status \"c705ad68ab9cb657931fdda31e90b636eea6547aac1744da547490a211711bd9\": rpc error: code = Unknown desc = Error: No such container: c705ad68ab9cb657931fdda31e90b636eea6547aac1744da547490a211711bd9"
	Jun 02 17:17:27 addons-20220602171222-283122 kubelet[1916]: I0602 17:17:27.367103    1916 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=164ddabb-ed64-4c3b-b6d0-febaab2c0a84 path="/var/lib/kubelet/pods/164ddabb-ed64-4c3b-b6d0-febaab2c0a84/volumes"
	
	* 
	* ==> storage-provisioner [7ceac803abc2] <==
	* I0602 17:13:12.247822       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0602 17:13:12.256704       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0602 17:13:12.256798       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0602 17:13:12.345480       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0602 17:13:12.345707       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20220602171222-283122_06919a51-ae58-459d-bcb6-1210418d49c3!
	I0602 17:13:12.435785       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce95e458-1bc7-4e41-aa78-43991aaaf333", APIVersion:"v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20220602171222-283122_06919a51-ae58-459d-bcb6-1210418d49c3 became leader
	I0602 17:13:12.549986       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20220602171222-283122_06919a51-ae58-459d-bcb6-1210418d49c3!
	

                                                
                                                
-- /stdout --
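Note on the Allocated resources table in the node description above: the percentages are computed against the node's Allocatable values. For example, 750m of CPU requested out of 8 CPU allocatable is 750/8000 ≈ 9.4%, shown as 9%; 170Mi of memory out of 32873816Ki allocatable is about 0.5%, which displays as 0%.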
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-20220602171222-283122 -n addons-20220602171222-283122
helpers_test.go:261: (dbg) Run:  kubectl --context addons-20220602171222-283122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context addons-20220602171222-283122 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context addons-20220602171222-283122 describe pod : exit status 1 (46.037312ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context addons-20220602171222-283122 describe pod : exit status 1
--- FAIL: TestAddons/parallel/Registry (212.85s)
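The empty "kubectl describe pod" invocation above is a follow-on effect rather than a separate failure: the field-selector query at helpers_test.go:261 matched no non-running pods, so no pod names were passed to describe and kubectl exited with "resource name may not be empty". A guarded version of that post-mortem step (an illustrative shell sketch, not the harness's own code) would skip the call when the list is empty:

	pods=$(kubectl --context addons-20220602171222-283122 get po -A \
	  --field-selector=status.phase!=Running -o jsonpath='{.items[*].metadata.name}')
	# Only run describe when the selector matched at least one pod.
	[ -n "$pods" ] && kubectl --context addons-20220602171222-283122 describe pod $pods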

                                                
                                    
TestFunctional/serial/ComponentHealth (2.55s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220602171905-283122 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:825: etcd is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2022-06-02 17:19:30 +0000 UTC ContainerStatuses:[{Name:etcd State:{Waiting:<nil> Running:0xc01d07d920 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0006d0000} Ready:false RestartCount:1 Image:k8s.gcr.io/etcd:3.5.1-0 ImageID:docker-pullable://k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263 ContainerID:docker://6280eaa54ed8bd5927d6daa34a455a0da36d72ec0981b62af7b5a6e803c6936d}]}
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:825: kube-apiserver is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2022-06-02 17:19:30 +0000 UTC ContainerStatuses:[{Name:kube-apiserver State:{Waiting:<nil> Running:0xc01d07db90 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0006d0070} Ready:false RestartCount:1 Image:k8s.gcr.io/kube-apiserver:v1.23.6 ImageID:docker-pullable://k8s.gcr.io/kube-apiserver@sha256:0cd8c0bed8d89d914ee5df41e8a40112fb0a28804429c7964296abedc94da9f1 ContainerID:docker://1c227889c513eaee118d03c9e4bf3c1e86d411fafa8eab23a2d54f6813fe4491}]}
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:825: kube-controller-manager is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2022-06-02 17:19:30 +0000 UTC ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:<nil> Running:0xc01d07de00 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0006d00e0} Ready:false RestartCount:1 Image:k8s.gcr.io/kube-controller-manager:v1.23.6 ImageID:docker-pullable://k8s.gcr.io/kube-controller-manager@sha256:df94796b78d2285ffe6b231c2b39d25034dde8814de2f75d953a827e77fe6adf ContainerID:docker://020a23185b37e9d6572d843148f5fb0404419a272aa618237ad58288b3e971bd}]}
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:825: kube-scheduler is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2022-06-02 17:19:30 +0000 UTC ContainerStatuses:[{Name:kube-scheduler State:{Waiting:<nil> Running:0xc01d07dfc8 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0006d0150} Ready:false RestartCount:1 Image:k8s.gcr.io/kube-scheduler:v1.23.6 ImageID:docker-pullable://k8s.gcr.io/kube-scheduler@sha256:02b4e994459efa49c3e2392733e269893e23d4ac46e92e94107652963caae78b ContainerID:docker://5cabfd9de847aaaf1be1fd7b1f0d79d72f168a177c9a16adccf2e6647c8e9f1a}]}
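For reference, the Ready condition that functional_test.go:825 is reporting can be read straight from the pods; a minimal kubectl sketch (the jsonpath expression here is illustrative, not the test's own code):

	kubectl --context functional-20220602171905-283122 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

In the dumps above, each control-plane pod shows Phase Running with RestartCount 1 and Ready False, i.e. the containers had restarted once and had not yet passed readiness when the test sampled them.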
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220602171905-283122
helpers_test.go:235: (dbg) docker inspect functional-20220602171905-283122:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be",
	        "Created": "2022-06-02T17:19:13.281758205Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306853,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:19:13.665348671Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be/hostname",
	        "HostsPath": "/var/lib/docker/containers/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be/hosts",
	        "LogPath": "/var/lib/docker/containers/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be-json.log",
	        "Name": "/functional-20220602171905-283122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220602171905-283122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220602171905-283122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/86fe0ac079b421c9dbe7f7740293c7eec9418fad08adbe6a17a004a9f4752e8c-init/diff:/var/lib/docker/overlay2/9517781e97ae29bbe1cf38381a601e806342524da1c5d5dc15ba54ec17f204b6/diff:/var/lib/docker/overlay2/28e14d8724bd82b9b6adde55b66e6d1f1be4fa6c3379f2f44ca1138dd648175a/diff:/var/lib/docker/overlay2/24589dfb9ddc941e7cb22b5e38dec69d1af0a29a330d344152ca7f26010354a8/diff:/var/lib/docker/overlay2/66c45d6273a3507b6e0b6b0753ec4fc82f07160fd115626e24a42bda27bbba86/diff:/var/lib/docker/overlay2/a6f0c661bd178a2a0fef034493a114067e8952b8516e69aff1e1e94a75918bbc/diff:/var/lib/docker/overlay2/5b40a045506372be84bfb01d05b068bc4cc926c5eb9e868226c4c386e8a331c7/diff:/var/lib/docker/overlay2/dbf9a9c6e59628984c95ea93ff4ada66c53b22ca3c2f910f70efba895cf40718/diff:/var/lib/docker/overlay2/5eea619393ee989a35a37a13b0633685af91729e1b74f6e73fd94b63c9d7d1fa/diff:/var/lib/docker/overlay2/24de6fbece5386c3658fa3e2c01723e6c1c24fb837d53b3d5a8c75b64ce2cd06/diff:/var/lib/docker/overlay2/04593e
f7d3bf72ee8b7439b0326af5b9818930e0dd48c35dba97e3b4ae39f4ee/diff:/var/lib/docker/overlay2/09310697cecb557085bb4d5dfc810a82fc533759ad98179263b2fc94153a64fa/diff:/var/lib/docker/overlay2/ee75c420f0615a19a5d44cfe042bdca5a2cf0f1c541168ba424140cf2aa7f98d/diff:/var/lib/docker/overlay2/fc8643a3e2060ab0365cd1227ab9ebbd5a573e6ebf8379bc6b64a756a1f66562/diff:/var/lib/docker/overlay2/0f8571cab87b35114fab495cbde9af8ac7e58be99412da82a79c90239e2227e1/diff:/var/lib/docker/overlay2/e1a5ae079e4f566081e5e786955257c23809c8719ea6b4593c8c91ac00380379/diff:/var/lib/docker/overlay2/373565e63dec12c85906e80b4a030f7d852992a756749be9424ac17802d589c0/diff:/var/lib/docker/overlay2/b886b42aa5e9f06b4381a6f58d55338774a2d02af2e91e3de156aa4bca4e4be0/diff:/var/lib/docker/overlay2/dee534a744ce54d08ca752b7f64e1772c063d4ba1e008c5a9773188bb009c377/diff:/var/lib/docker/overlay2/c8cf6667f29ffc5417675e4bd8eee6a91468de13292e3cb6ae2c70d3ad56d5fd/diff:/var/lib/docker/overlay2/072b57a4228c3928bdb27162809b102b485bb94c5f726c9a3e9892267ed30839/diff:/var/lib/d
ocker/overlay2/491d5b51bf6de1cfc4eef288c454f54e3b424e3b83fd9dc216c35891c7923f55/diff:/var/lib/docker/overlay2/78f167c2cc286e4058ef09d5a719437afddde854aac41b8d054567865fa61024/diff:/var/lib/docker/overlay2/10763b3ba26983ed9426ccca08bfceb7037d95c69827847e2ff755b4f2c5a4ab/diff:/var/lib/docker/overlay2/e559098efb21164d96bd0aee50fee9f48310df68887c5c884f4ba3f3fc895260/diff:/var/lib/docker/overlay2/555ef1505fe2b1ab4244f073e9212f269d70d8cd6ca0cd517d21cc80cd25b4c1/diff:/var/lib/docker/overlay2/0201da7c907a1f8f564cda48c3f3a403e484d901739d2584f1343a7907685c5e/diff:/var/lib/docker/overlay2/0abae5d84f8497a5114349421415431c43a34015ad98f6c9c3140143d2f1f383/diff:/var/lib/docker/overlay2/80c37982eef2dc00bd08f208551df377570134af84008156f6a500855fd2ba10/diff:/var/lib/docker/overlay2/cc38481bb11be97cd54c0032709337a491580c2f0ae6d3f7eec4c3616f8e3126/diff:/var/lib/docker/overlay2/dda978aeb4c7317bafbc077229c4417a4d8e7658d9604803c401448b471eab5e/diff:/var/lib/docker/overlay2/7c230ae0970689e2932cdaadc9e3f86ec8215fbf384c9eb59cfb958daf7
3197e/diff:/var/lib/docker/overlay2/145db51b169211e46c3bbab5ca998b2a019a4a813801acc2dae9b2dc09bc2ecc/diff:/var/lib/docker/overlay2/e604163672a013549c59bc042d1b932a3dad9e104d7079fbbb3ca24c29351c0d/diff:/var/lib/docker/overlay2/5a304d8680fd3fb594754173a70646923e2715ed21c98b8c43e1f1ef2d7abd6a/diff:/var/lib/docker/overlay2/b77f00351ddd70be2f3d9dd622b78e1a2937c4ba71585b357fef88d285eb762e/diff:/var/lib/docker/overlay2/196b6ae042b9b08748253c16f5e2b56d7f9a25077008d3d447cbc75447ed5e2a/diff:/var/lib/docker/overlay2/8c7c3fbb0bed8f7440d11104257806c70d16fd7d432311fc208d5012a439e5a1/diff:/var/lib/docker/overlay2/10604aa3b8ed5698c9fa8f55e00b9a9dc27d6b99135255d6c756fc080e615fc6/diff:/var/lib/docker/overlay2/c6602a17e9b0ab1d84297109204b48e60f293952fbc1feaf362895ec86860ead/diff:/var/lib/docker/overlay2/3828c5e50db9be2b4bc30fce040392ea735da5eaa819ad0848cb4fb79fc367da/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/86fe0ac079b421c9dbe7f7740293c7eec9418fad08adbe6a17a004a9f4752e8c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/86fe0ac079b421c9dbe7f7740293c7eec9418fad08adbe6a17a004a9f4752e8c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/86fe0ac079b421c9dbe7f7740293c7eec9418fad08adbe6a17a004a9f4752e8c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220602171905-283122",
	                "Source": "/var/lib/docker/volumes/functional-20220602171905-283122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220602171905-283122",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220602171905-283122",
	                "name.minikube.sigs.k8s.io": "functional-20220602171905-283122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0b577a7c7c0f72102b1d1c7e48acd0a27d4af6fac311a08cd9abd09a1ffd9224",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49457"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49456"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49453"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49455"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49454"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0b577a7c7c0f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220602171905-283122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ccf73bf4d78c",
	                        "functional-20220602171905-283122"
	                    ],
	                    "NetworkID": "b21549cf349aba8b9852bb7975ba376ac9fe089fee8b7f76e4abbd8a3c8aa318",
	                    "EndpointID": "1dd452a0f5b2f58c8dec0c5a585634fa07b5e2ec3756cb8b90f9991f5515fd22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
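The inspect dump above is the node container's full JSON state; when only a single field is needed, a Go template pulls it directly instead of parsing the whole document. A minimal sketch against the same container (this mirrors the template the harness itself runs later in these logs, and should print 49457, the host port mapped to the node's SSH port):

    docker container inspect functional-20220602171905-283122 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'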
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-20220602171905-283122 -n functional-20220602171905-283122
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-20220602171905-283122 logs -n 25: (1.44604093s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|----------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                                   Args                                   |             Profile              |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------------------|---------|----------------|---------------------|---------------------|
	| pause   | nospam-20220602171820-283122                                             | nospam-20220602171820-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:18 UTC | 02 Jun 22 17:18 UTC |
	|         | --log_dir                                                                |                                  |         |                |                     |                     |
	|         | /tmp/nospam-20220602171820-283122                                        |                                  |         |                |                     |                     |
	|         | pause                                                                    |                                  |         |                |                     |                     |
	| unpause | nospam-20220602171820-283122                                             | nospam-20220602171820-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:18 UTC | 02 Jun 22 17:18 UTC |
	|         | --log_dir                                                                |                                  |         |                |                     |                     |
	|         | /tmp/nospam-20220602171820-283122                                        |                                  |         |                |                     |                     |
	|         | unpause                                                                  |                                  |         |                |                     |                     |
	| unpause | nospam-20220602171820-283122                                             | nospam-20220602171820-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:18 UTC | 02 Jun 22 17:18 UTC |
	|         | --log_dir                                                                |                                  |         |                |                     |                     |
	|         | /tmp/nospam-20220602171820-283122                                        |                                  |         |                |                     |                     |
	|         | unpause                                                                  |                                  |         |                |                     |                     |
	| unpause | nospam-20220602171820-283122                                             | nospam-20220602171820-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:18 UTC | 02 Jun 22 17:18 UTC |
	|         | --log_dir                                                                |                                  |         |                |                     |                     |
	|         | /tmp/nospam-20220602171820-283122                                        |                                  |         |                |                     |                     |
	|         | unpause                                                                  |                                  |         |                |                     |                     |
	| stop    | nospam-20220602171820-283122                                             | nospam-20220602171820-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:18 UTC | 02 Jun 22 17:19 UTC |
	|         | --log_dir                                                                |                                  |         |                |                     |                     |
	|         | /tmp/nospam-20220602171820-283122                                        |                                  |         |                |                     |                     |
	|         | stop                                                                     |                                  |         |                |                     |                     |
	| stop    | nospam-20220602171820-283122                                             | nospam-20220602171820-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:19 UTC | 02 Jun 22 17:19 UTC |
	|         | --log_dir                                                                |                                  |         |                |                     |                     |
	|         | /tmp/nospam-20220602171820-283122                                        |                                  |         |                |                     |                     |
	|         | stop                                                                     |                                  |         |                |                     |                     |
	| stop    | nospam-20220602171820-283122                                             | nospam-20220602171820-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:19 UTC | 02 Jun 22 17:19 UTC |
	|         | --log_dir                                                                |                                  |         |                |                     |                     |
	|         | /tmp/nospam-20220602171820-283122                                        |                                  |         |                |                     |                     |
	|         | stop                                                                     |                                  |         |                |                     |                     |
	| delete  | -p                                                                       | nospam-20220602171820-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:19 UTC | 02 Jun 22 17:19 UTC |
	|         | nospam-20220602171820-283122                                             |                                  |         |                |                     |                     |
	| start   | -p                                                                       | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:19 UTC | 02 Jun 22 17:19 UTC |
	|         | functional-20220602171905-283122                                         |                                  |         |                |                     |                     |
	|         | --memory=4000                                                            |                                  |         |                |                     |                     |
	|         | --apiserver-port=8441                                                    |                                  |         |                |                     |                     |
	|         | --wait=all --driver=docker                                               |                                  |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                                  |         |                |                     |                     |
	| start   | -p                                                                       | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:19 UTC | 02 Jun 22 17:23 UTC |
	|         | functional-20220602171905-283122                                         |                                  |         |                |                     |                     |
	|         | --alsologtostderr -v=8                                                   |                                  |         |                |                     |                     |
	| cache   | functional-20220602171905-283122                                         | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:23 UTC | 02 Jun 22 17:23 UTC |
	|         | cache add k8s.gcr.io/pause:3.1                                           |                                  |         |                |                     |                     |
	| cache   | functional-20220602171905-283122                                         | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:23 UTC | 02 Jun 22 17:24 UTC |
	|         | cache add k8s.gcr.io/pause:3.3                                           |                                  |         |                |                     |                     |
	| cache   | functional-20220602171905-283122                                         | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|         | cache add                                                                |                                  |         |                |                     |                     |
	|         | k8s.gcr.io/pause:latest                                                  |                                  |         |                |                     |                     |
	| cache   | functional-20220602171905-283122 cache add                               | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|         | minikube-local-cache-test:functional-20220602171905-283122               |                                  |         |                |                     |                     |
	| cache   | functional-20220602171905-283122 cache delete                            | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|         | minikube-local-cache-test:functional-20220602171905-283122               |                                  |         |                |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.3                                              | minikube                         | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	| cache   | list                                                                     | minikube                         | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	| ssh     | functional-20220602171905-283122                                         | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|         | ssh sudo crictl images                                                   |                                  |         |                |                     |                     |
	| ssh     | functional-20220602171905-283122                                         | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|         | ssh sudo docker rmi                                                      |                                  |         |                |                     |                     |
	|         | k8s.gcr.io/pause:latest                                                  |                                  |         |                |                     |                     |
	| cache   | functional-20220602171905-283122                                         | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|         | cache reload                                                             |                                  |         |                |                     |                     |
	| ssh     | functional-20220602171905-283122                                         | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|         | ssh sudo crictl inspecti                                                 |                                  |         |                |                     |                     |
	|         | k8s.gcr.io/pause:latest                                                  |                                  |         |                |                     |                     |
	| cache   | delete k8s.gcr.io/pause:3.1                                              | minikube                         | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	| cache   | delete k8s.gcr.io/pause:latest                                           | minikube                         | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	| kubectl | functional-20220602171905-283122                                         | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|         | kubectl -- --context                                                     |                                  |         |                |                     |                     |
	|         | functional-20220602171905-283122                                         |                                  |         |                |                     |                     |
	|         | get pods                                                                 |                                  |         |                |                     |                     |
	| start   | -p functional-20220602171905-283122                                      | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                                  |         |                |                     |                     |
	|         | --wait=all                                                               |                                  |         |                |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------------------|---------|----------------|---------------------|---------------------|
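	# The final audit row reruns `start` with --extra-config, minikube's pass-through
	# for per-component flags (here forwarding enable-admission-plugins to the
	# apiserver); as a sketch, the same invocation from the host reads:
	#   out/minikube-linux-amd64 start -p functional-20220602171905-283122 \
	#     --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all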
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 17:24:05
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 17:24:05.629505  314124 out.go:296] Setting OutFile to fd 1 ...
	I0602 17:24:05.629735  314124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:24:05.629739  314124 out.go:309] Setting ErrFile to fd 2...
	I0602 17:24:05.629743  314124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:24:05.629869  314124 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 17:24:05.630175  314124 out.go:303] Setting JSON to false
	I0602 17:24:05.631241  314124 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7599,"bootTime":1654183047,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0602 17:24:05.631305  314124 start.go:125] virtualization: kvm guest
	I0602 17:24:05.634186  314124 out.go:177] * [functional-20220602171905-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0602 17:24:05.636356  314124 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 17:24:05.636188  314124 notify.go:193] Checking for updates...
	I0602 17:24:05.639932  314124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 17:24:05.641661  314124 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:24:05.643605  314124 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 17:24:05.645381  314124 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0602 17:24:05.647470  314124 config.go:178] Loaded profile config "functional-20220602171905-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:24:05.647543  314124 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 17:24:05.688609  314124 docker.go:137] docker version: linux-20.10.16
	I0602 17:24:05.688719  314124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:24:05.796298  314124 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-06-02 17:24:05.718997938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:24:05.796447  314124 docker.go:254] overlay module found
	I0602 17:24:05.799041  314124 out.go:177] * Using the docker driver based on existing profile
	I0602 17:24:05.800813  314124 start.go:284] selected driver: docker
	I0602 17:24:05.800824  314124 start.go:806] validating driver "docker" against &{Name:functional-20220602171905-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602171905-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:24:05.800967  314124 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 17:24:05.801206  314124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:24:05.909549  314124 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-06-02 17:24:05.832134804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:24:05.910091  314124 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 17:24:05.910109  314124 cni.go:95] Creating CNI manager for ""
	I0602 17:24:05.910116  314124 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 17:24:05.910129  314124 start_flags.go:306] config:
	{Name:functional-20220602171905-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602171905-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:24:05.913347  314124 out.go:177] * Starting control plane node functional-20220602171905-283122 in cluster functional-20220602171905-283122
	I0602 17:24:05.914822  314124 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 17:24:05.916501  314124 out.go:177] * Pulling base image ...
	I0602 17:24:05.917886  314124 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:24:05.917934  314124 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 17:24:05.917942  314124 cache.go:57] Caching tarball of preloaded images
	I0602 17:24:05.917979  314124 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 17:24:05.918178  314124 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 17:24:05.918189  314124 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 17:24:05.918642  314124 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/config.json ...
	I0602 17:24:05.966243  314124 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 17:24:05.966262  314124 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 17:24:05.966271  314124 cache.go:206] Successfully downloaded all kic artifacts
	I0602 17:24:05.966342  314124 start.go:352] acquiring machines lock for functional-20220602171905-283122: {Name:mk661f347a5814dfb9c8b0ea74c2401c262e8064 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 17:24:05.966461  314124 start.go:356] acquired machines lock for "functional-20220602171905-283122" in 99.45µs
	I0602 17:24:05.966478  314124 start.go:94] Skipping create...Using existing machine configuration
	I0602 17:24:05.966488  314124 fix.go:55] fixHost starting: 
	I0602 17:24:05.966722  314124 cli_runner.go:164] Run: docker container inspect functional-20220602171905-283122 --format={{.State.Status}}
	I0602 17:24:06.001543  314124 fix.go:103] recreateIfNeeded on functional-20220602171905-283122: state=Running err=<nil>
	W0602 17:24:06.001587  314124 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 17:24:06.004061  314124 out.go:177] * Updating the running docker "functional-20220602171905-283122" container ...
	I0602 17:24:06.005661  314124 machine.go:88] provisioning docker machine ...
	I0602 17:24:06.005687  314124 ubuntu.go:169] provisioning hostname "functional-20220602171905-283122"
	I0602 17:24:06.005749  314124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220602171905-283122
	I0602 17:24:06.039885  314124 main.go:134] libmachine: Using SSH client type: native
	I0602 17:24:06.040059  314124 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49457 <nil> <nil>}
	I0602 17:24:06.040068  314124 main.go:134] libmachine: About to run SSH command:
	sudo hostname functional-20220602171905-283122 && echo "functional-20220602171905-283122" | sudo tee /etc/hostname
	I0602 17:24:06.169927  314124 main.go:134] libmachine: SSH cmd err, output: <nil>: functional-20220602171905-283122
	
	I0602 17:24:06.170003  314124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220602171905-283122
	I0602 17:24:06.204945  314124 main.go:134] libmachine: Using SSH client type: native
	I0602 17:24:06.205150  314124 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49457 <nil> <nil>}
	I0602 17:24:06.205168  314124 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-20220602171905-283122' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20220602171905-283122/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-20220602171905-283122' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 17:24:06.321039  314124 main.go:134] libmachine: SSH cmd err, output: <nil>: 
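	# The hosts snippet above is idempotent: it rewrites an existing 127.0.1.1 entry,
	# appends one if none exists, and is a no-op when the hostname already resolves;
	# the empty output here matches one of the silent branches. A hypothetical check
	# on the node (not run by the harness):
	#   grep functional-20220602171905-283122 /etc/hosts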
	I0602 17:24:06.321062  314124 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 17:24:06.321086  314124 ubuntu.go:177] setting up certificates
	I0602 17:24:06.321096  314124 provision.go:83] configureAuth start
	I0602 17:24:06.321153  314124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20220602171905-283122
	I0602 17:24:06.355525  314124 provision.go:138] copyHostCerts
	I0602 17:24:06.355583  314124 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 17:24:06.355599  314124 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 17:24:06.355661  314124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 17:24:06.355749  314124 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 17:24:06.355757  314124 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 17:24:06.355780  314124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 17:24:06.355843  314124 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 17:24:06.355847  314124 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 17:24:06.355867  314124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1679 bytes)
	I0602 17:24:06.355904  314124 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.functional-20220602171905-283122 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-20220602171905-283122]
	I0602 17:24:06.483549  314124 provision.go:172] copyRemoteCerts
	I0602 17:24:06.483597  314124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 17:24:06.483634  314124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220602171905-283122
	I0602 17:24:06.518344  314124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49457 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/functional-20220602171905-283122/id_rsa Username:docker}
	I0602 17:24:06.608913  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 17:24:06.627710  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 17:24:06.646692  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0602 17:24:06.664635  314124 provision.go:86] duration metric: configureAuth took 343.521652ms
	I0602 17:24:06.664654  314124 ubuntu.go:193] setting minikube options for container-runtime
	I0602 17:24:06.664848  314124 config.go:178] Loaded profile config "functional-20220602171905-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:24:06.664897  314124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220602171905-283122
	I0602 17:24:06.698361  314124 main.go:134] libmachine: Using SSH client type: native
	I0602 17:24:06.698545  314124 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49457 <nil> <nil>}
	I0602 17:24:06.698557  314124 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 17:24:06.813514  314124 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 17:24:06.813531  314124 ubuntu.go:71] root file system type: overlay
	I0602 17:24:06.813731  314124 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 17:24:06.813787  314124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220602171905-283122
	I0602 17:24:06.848499  314124 main.go:134] libmachine: Using SSH client type: native
	I0602 17:24:06.848640  314124 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49457 <nil> <nil>}
	I0602 17:24:06.848710  314124 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 17:24:06.974649  314124 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 17:24:06.974715  314124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220602171905-283122
	I0602 17:24:07.008539  314124 main.go:134] libmachine: Using SSH client type: native
	I0602 17:24:07.008675  314124 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49457 <nil> <nil>}
	I0602 17:24:07.008687  314124 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 17:24:07.129005  314124 main.go:134] libmachine: SSH cmd err, output: <nil>: 
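	# The previous command is a guarded update: `diff -u` exits 0 when the freshly
	# rendered docker.service.new matches the installed unit, so the
	# mv/daemon-reload/restart branch only fires on a real change; the empty output
	# above is consistent with an unchanged unit. The pattern in isolation (a sketch
	# with placeholder file names):
	#   sudo diff -u installed.conf candidate.conf \
	#     || { sudo mv candidate.conf installed.conf; sudo systemctl daemon-reload; }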
	I0602 17:24:07.129039  314124 machine.go:91] provisioned docker machine in 1.12336661s
	I0602 17:24:07.129050  314124 start.go:306] post-start starting for "functional-20220602171905-283122" (driver="docker")
	I0602 17:24:07.129058  314124 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 17:24:07.129122  314124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 17:24:07.129159  314124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220602171905-283122
	I0602 17:24:07.163496  314124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49457 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/functional-20220602171905-283122/id_rsa Username:docker}
	I0602 17:24:07.248730  314124 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 17:24:07.251587  314124 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 17:24:07.251600  314124 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 17:24:07.251607  314124 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 17:24:07.251613  314124 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 17:24:07.251621  314124 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 17:24:07.251677  314124 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 17:24:07.251753  314124 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem -> 2831222.pem in /etc/ssl/certs
	I0602 17:24:07.251806  314124 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/test/nested/copy/283122/hosts -> hosts in /etc/test/nested/copy/283122
	I0602 17:24:07.251834  314124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/283122
	I0602 17:24:07.258996  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem --> /etc/ssl/certs/2831222.pem (1708 bytes)
	I0602 17:24:07.277711  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/test/nested/copy/283122/hosts --> /etc/test/nested/copy/283122/hosts (40 bytes)
	I0602 17:24:07.296018  314124 start.go:309] post-start completed in 166.950663ms
	I0602 17:24:07.296086  314124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 17:24:07.296119  314124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220602171905-283122
	I0602 17:24:07.330181  314124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49457 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/functional-20220602171905-283122/id_rsa Username:docker}
	I0602 17:24:07.417966  314124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 17:24:07.422429  314124 fix.go:57] fixHost completed within 1.455932654s
	I0602 17:24:07.422459  314124 start.go:81] releasing machines lock for "functional-20220602171905-283122", held for 1.455980885s
	I0602 17:24:07.422577  314124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20220602171905-283122
	I0602 17:24:07.456271  314124 ssh_runner.go:195] Run: systemctl --version
	I0602 17:24:07.456310  314124 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 17:24:07.456315  314124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220602171905-283122
	I0602 17:24:07.456389  314124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220602171905-283122
	I0602 17:24:07.493317  314124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49457 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/functional-20220602171905-283122/id_rsa Username:docker}
	I0602 17:24:07.494506  314124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49457 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/functional-20220602171905-283122/id_rsa Username:docker}
	I0602 17:24:07.595771  314124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 17:24:07.605928  314124 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 17:24:07.615819  314124 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 17:24:07.615874  314124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 17:24:07.625617  314124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 17:24:07.639385  314124 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 17:24:07.741763  314124 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 17:24:07.844876  314124 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 17:24:07.855154  314124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 17:24:07.949251  314124 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 17:24:07.959708  314124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 17:24:08.000341  314124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 17:24:08.043090  314124 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 17:24:08.043201  314124 cli_runner.go:164] Run: docker network inspect functional-20220602171905-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 17:24:08.076604  314124 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0602 17:24:08.082740  314124 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0602 17:24:08.084543  314124 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:24:08.084610  314124 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 17:24:08.118780  314124 docker.go:610] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20220602171905-283122
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.3
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0602 17:24:08.118797  314124 docker.go:541] Images already preloaded, skipping extraction
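The image list above is how the "Images already preloaded, skipping extraction" conclusion is reached: every image required for v1.23.6 already appears in "docker images". A sketch of that containment check, assuming a local docker CLI on PATH; imagesPreloaded is an illustrative name, not minikube's function:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPreloaded lists what the daemon has via
// "docker images --format {{.Repository}}:{{.Tag}}" (the exact command in
// the log) and verifies every required image is present.
func imagesPreloaded(required []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[img] = true
	}
	for _, img := range required {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{"k8s.gcr.io/pause:3.6", "k8s.gcr.io/etcd:3.5.1-0"})
	fmt.Println(ok, err)
}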
	I0602 17:24:08.118861  314124 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 17:24:08.153687  314124 docker.go:610] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20220602171905-283122
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.3
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0602 17:24:08.153705  314124 cache_images.go:84] Images are preloaded, skipping loading
	I0602 17:24:08.153755  314124 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 17:24:08.241083  314124 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0602 17:24:08.241124  314124 cni.go:95] Creating CNI manager for ""
	I0602 17:24:08.241134  314124 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 17:24:08.241142  314124 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 17:24:08.241156  314124 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20220602171905-283122 NodeName:functional-20220602171905-283122 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 17:24:08.241340  314124 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "functional-20220602171905-283122"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
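The generated kubeadm config above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A stdlib-only Go sketch that splits such a file and reports each document's kind; docKinds is an illustrative name and no YAML parser is assumed:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// docKinds splits a multi-document YAML string on "---" separators and
// extracts each document's "kind:" value.
func docKinds(config string) []string {
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	var kinds []string
	for _, doc := range strings.Split(config, "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			kinds = append(kinds, m[1])
		}
	}
	return kinds
}

func main() {
	sample := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
	fmt.Println(docKinds(sample)) // [InitConfiguration ClusterConfiguration]
}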
	I0602 17:24:08.241430  314124 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=functional-20220602171905-283122 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:functional-20220602171905-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0602 17:24:08.241479  314124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 17:24:08.249779  314124 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 17:24:08.249943  314124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 17:24:08.257647  314124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0602 17:24:08.271054  314124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 17:24:08.283848  314124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1904 bytes)
	I0602 17:24:08.298060  314124 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0602 17:24:08.301395  314124 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122 for IP: 192.168.49.2
	I0602 17:24:08.301503  314124 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 17:24:08.301534  314124 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 17:24:08.301602  314124 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.key
	I0602 17:24:08.301643  314124 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/apiserver.key.dd3b5fb2
	I0602 17:24:08.301675  314124 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/proxy-client.key
	I0602 17:24:08.301779  314124 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem (1338 bytes)
	W0602 17:24:08.301802  314124 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122_empty.pem, impossibly tiny 0 bytes
	I0602 17:24:08.301809  314124 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 17:24:08.301828  314124 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 17:24:08.301845  314124 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 17:24:08.301875  314124 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1679 bytes)
	I0602 17:24:08.301907  314124 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem (1708 bytes)
	I0602 17:24:08.302604  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 17:24:08.321404  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0602 17:24:08.339705  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 17:24:08.358934  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 17:24:08.377897  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 17:24:08.398265  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0602 17:24:08.417249  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 17:24:08.435480  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0602 17:24:08.454324  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 17:24:08.473218  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem --> /usr/share/ca-certificates/283122.pem (1338 bytes)
	I0602 17:24:08.491774  314124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem --> /usr/share/ca-certificates/2831222.pem (1708 bytes)
	I0602 17:24:08.510120  314124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 17:24:08.523775  314124 ssh_runner.go:195] Run: openssl version
	I0602 17:24:08.528880  314124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/283122.pem && ln -fs /usr/share/ca-certificates/283122.pem /etc/ssl/certs/283122.pem"
	I0602 17:24:08.537355  314124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/283122.pem
	I0602 17:24:08.540945  314124 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:19 /usr/share/ca-certificates/283122.pem
	I0602 17:24:08.540997  314124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/283122.pem
	I0602 17:24:08.546060  314124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/283122.pem /etc/ssl/certs/51391683.0"
	I0602 17:24:08.553446  314124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2831222.pem && ln -fs /usr/share/ca-certificates/2831222.pem /etc/ssl/certs/2831222.pem"
	I0602 17:24:08.561416  314124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2831222.pem
	I0602 17:24:08.564651  314124 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:19 /usr/share/ca-certificates/2831222.pem
	I0602 17:24:08.564695  314124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2831222.pem
	I0602 17:24:08.569577  314124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2831222.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 17:24:08.576873  314124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 17:24:08.584951  314124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:24:08.588404  314124 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:24:08.588449  314124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:24:08.593693  314124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
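The openssl/ln sequence above installs each CA into /etc/ssl/certs under its OpenSSL subject hash with a ".0" suffix (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL locates trusted certs by hash. A sketch of the same two steps, assuming an openssl binary on PATH; installCACert is an illustrative name, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert computes the certificate's OpenSSL subject hash
// ("openssl x509 -hash -noout -in <pem>", as run in the log) and symlinks
// the PEM into certsDir as "<hash>.0".
func installCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // replace any stale link, like "ln -fs"
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}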
	I0602 17:24:08.601427  314124 kubeadm.go:395] StartCluster: {Name:functional-20220602171905-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602171905-283122 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:24:08.601588  314124 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 17:24:08.635875  314124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 17:24:08.643573  314124 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0602 17:24:08.643590  314124 kubeadm.go:626] restartCluster start
	I0602 17:24:08.643639  314124 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0602 17:24:08.651195  314124 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0602 17:24:08.651779  314124 kubeconfig.go:92] found "functional-20220602171905-283122" server: "https://192.168.49.2:8441"
	I0602 17:24:08.652767  314124 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0602 17:24:08.660094  314124 kubeadm.go:593] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-06-02 17:19:17.960268140 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-06-02 17:24:08.291691629 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
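The "needs reconfigure" decision above comes from running "diff -u" over the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new: diff exits 0 when the files match and 1 when they differ, so a non-zero status with exit code 1 means the cluster must be reconfigured. A Go sketch of that decision; configsDiffer is an illustrative name:

package main

import (
	"fmt"
	"os/exec"
)

// configsDiffer runs "diff -u old new" the way the log does: exit status 0
// means the files match, 1 means they differ (reconfigure needed), and any
// other failure is a real error.
func configsDiffer(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	differ, diff, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(differ, err)
	if differ {
		fmt.Print(diff)
	}
}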
	I0602 17:24:08.660105  314124 kubeadm.go:1092] stopping kube-system containers ...
	I0602 17:24:08.660150  314124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 17:24:08.697132  314124 docker.go:442] Stopping containers: [81600bc50246 4e587023f927 b8aba85da98e 4b794e67eb6e 0544243af64e 895c379f6918 b2cbc18e9595 22f9f15b9a3f 92951e242fba 5353917f3d71 ae98ee0fc94f 3f293bdfbc88 730f8be15784 4fa66a37576f]
	I0602 17:24:08.697197  314124 ssh_runner.go:195] Run: docker stop 81600bc50246 4e587023f927 b8aba85da98e 4b794e67eb6e 0544243af64e 895c379f6918 b2cbc18e9595 22f9f15b9a3f 92951e242fba 5353917f3d71 ae98ee0fc94f 3f293bdfbc88 730f8be15784 4fa66a37576f
	I0602 17:24:13.880810  314124 ssh_runner.go:235] Completed: docker stop 81600bc50246 4e587023f927 b8aba85da98e 4b794e67eb6e 0544243af64e 895c379f6918 b2cbc18e9595 22f9f15b9a3f 92951e242fba 5353917f3d71 ae98ee0fc94f 3f293bdfbc88 730f8be15784 4fa66a37576f: (5.183585991s)
	I0602 17:24:13.880863  314124 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0602 17:24:13.972536  314124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 17:24:13.980100  314124 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun  2 17:19 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jun  2 17:19 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jun  2 17:19 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jun  2 17:19 /etc/kubernetes/scheduler.conf
	
	I0602 17:24:13.980160  314124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0602 17:24:13.987451  314124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0602 17:24:13.994648  314124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0602 17:24:14.001810  314124 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 17:24:14.001865  314124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0602 17:24:14.009232  314124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0602 17:24:14.016557  314124 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0602 17:24:14.016613  314124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0602 17:24:14.023797  314124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 17:24:14.030965  314124 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0602 17:24:14.030985  314124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 17:24:14.074959  314124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 17:24:14.611475  314124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0602 17:24:14.874774  314124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 17:24:14.966584  314124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0602 17:24:15.060507  314124 api_server.go:51] waiting for apiserver process to appear ...
	I0602 17:24:15.060565  314124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 17:24:15.069742  314124 api_server.go:71] duration metric: took 9.235725ms to wait for apiserver process to appear ...
	I0602 17:24:15.069773  314124 api_server.go:87] waiting for apiserver healthz status ...
	I0602 17:24:15.069786  314124 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0602 17:24:15.074488  314124 api_server.go:266] https://192.168.49.2:8441/healthz returned 200:
	ok
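The healthz wait above polls https://192.168.49.2:8441/healthz until the apiserver answers 200 with body "ok". A sketch of a single probe, skipping TLS verification because the endpoint presents a cluster-internal certificate the host does not trust; checkHealthz is an illustrative name, not minikube's client:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver healthz endpoint and
// treats HTTP 200 with body "ok" as healthy, mirroring the log lines above.
func checkHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := checkHealthz("https://192.168.49.2:8441/healthz")
	fmt.Println(ok, err)
}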
	I0602 17:24:15.081636  314124 api_server.go:140] control plane version: v1.23.6
	I0602 17:24:15.081656  314124 api_server.go:130] duration metric: took 11.876768ms to wait for apiserver health ...
	I0602 17:24:15.081667  314124 cni.go:95] Creating CNI manager for ""
	I0602 17:24:15.081675  314124 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 17:24:15.081684  314124 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 17:24:15.090504  314124 system_pods.go:59] 7 kube-system pods found
	I0602 17:24:15.090525  314124 system_pods.go:61] "coredns-64897985d-fqvms" [6621aa03-5b38-445f-b662-d6838aedd8c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 17:24:15.090529  314124 system_pods.go:61] "etcd-functional-20220602171905-283122" [16e305ca-68dc-4b9a-ad48-e11022ccb3e2] Running
	I0602 17:24:15.090536  314124 system_pods.go:61] "kube-apiserver-functional-20220602171905-283122" [05a32744-7331-4979-8b90-c5b3b18fe049] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0602 17:24:15.090541  314124 system_pods.go:61] "kube-controller-manager-functional-20220602171905-283122" [cc1fee66-dc1d-49bd-af72-33b735fa7634] Running
	I0602 17:24:15.090544  314124 system_pods.go:61] "kube-proxy-x5hdb" [39bcad9e-d675-48d1-924d-ce7c5ab64b2c] Running
	I0602 17:24:15.090549  314124 system_pods.go:61] "kube-scheduler-functional-20220602171905-283122" [06201958-917d-4b61-896b-8eed8874b229] Running
	I0602 17:24:15.090554  314124 system_pods.go:61] "storage-provisioner" [ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 17:24:15.090560  314124 system_pods.go:74] duration metric: took 8.871246ms to wait for pod list to return data ...
	I0602 17:24:15.090566  314124 node_conditions.go:102] verifying NodePressure condition ...
	I0602 17:24:15.094602  314124 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0602 17:24:15.094619  314124 node_conditions.go:123] node cpu capacity is 8
	I0602 17:24:15.094628  314124 node_conditions.go:105] duration metric: took 4.059217ms to run NodePressure ...
	I0602 17:24:15.094648  314124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0602 17:24:15.553542  314124 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0602 17:24:15.558549  314124 kubeadm.go:777] kubelet initialised
	I0602 17:24:15.558563  314124 kubeadm.go:778] duration metric: took 5.001517ms waiting for restarted kubelet to initialise ...
	I0602 17:24:15.558571  314124 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 17:24:15.563859  314124 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-fqvms" in "kube-system" namespace to be "Ready" ...
	I0602 17:24:15.568746  314124 pod_ready.go:97] node "functional-20220602171905-283122" hosting pod "coredns-64897985d-fqvms" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220602171905-283122" has status "Ready":"False"
	I0602 17:24:15.568760  314124 pod_ready.go:81] duration metric: took 4.884366ms waiting for pod "coredns-64897985d-fqvms" in "kube-system" namespace to be "Ready" ...
	E0602 17:24:15.568770  314124 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220602171905-283122" hosting pod "coredns-64897985d-fqvms" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220602171905-283122" has status "Ready":"False"
	I0602 17:24:15.568794  314124 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:24:15.573417  314124 pod_ready.go:97] node "functional-20220602171905-283122" hosting pod "etcd-functional-20220602171905-283122" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220602171905-283122" has status "Ready":"False"
	I0602 17:24:15.573433  314124 pod_ready.go:81] duration metric: took 4.632386ms waiting for pod "etcd-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	E0602 17:24:15.573444  314124 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220602171905-283122" hosting pod "etcd-functional-20220602171905-283122" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220602171905-283122" has status "Ready":"False"
	I0602 17:24:15.573468  314124 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:24:15.577970  314124 pod_ready.go:97] node "functional-20220602171905-283122" hosting pod "kube-apiserver-functional-20220602171905-283122" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220602171905-283122" has status "Ready":"False"
	I0602 17:24:15.577984  314124 pod_ready.go:81] duration metric: took 4.507754ms waiting for pod "kube-apiserver-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	E0602 17:24:15.577996  314124 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220602171905-283122" hosting pod "kube-apiserver-functional-20220602171905-283122" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220602171905-283122" has status "Ready":"False"
	I0602 17:24:15.578027  314124 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:24:15.582433  314124 pod_ready.go:97] node "functional-20220602171905-283122" hosting pod "kube-controller-manager-functional-20220602171905-283122" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220602171905-283122" has status "Ready":"False"
	I0602 17:24:15.582451  314124 pod_ready.go:81] duration metric: took 4.415083ms waiting for pod "kube-controller-manager-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	E0602 17:24:15.582461  314124 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220602171905-283122" hosting pod "kube-controller-manager-functional-20220602171905-283122" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220602171905-283122" has status "Ready":"False"
	I0602 17:24:15.582487  314124 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x5hdb" in "kube-system" namespace to be "Ready" ...
	I0602 17:24:15.957298  314124 pod_ready.go:92] pod "kube-proxy-x5hdb" in "kube-system" namespace has status "Ready":"True"
	I0602 17:24:15.957309  314124 pod_ready.go:81] duration metric: took 374.813792ms waiting for pod "kube-proxy-x5hdb" in "kube-system" namespace to be "Ready" ...
	I0602 17:24:15.957318  314124 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:24:16.357051  314124 pod_ready.go:92] pod "kube-scheduler-functional-20220602171905-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:24:16.357063  314124 pod_ready.go:81] duration metric: took 399.738612ms waiting for pod "kube-scheduler-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:24:16.357078  314124 pod_ready.go:38] duration metric: took 798.496594ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 17:24:16.357100  314124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 17:24:16.364805  314124 ops.go:34] apiserver oom_adj: -16
	I0602 17:24:16.364820  314124 kubeadm.go:630] restartCluster took 7.721225412s
	I0602 17:24:16.364830  314124 kubeadm.go:397] StartCluster complete in 7.763414857s
	I0602 17:24:16.364850  314124 settings.go:142] acquiring lock: {Name:mkca69c8f6bc293fef8b552d09d771e1f2253f12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:24:16.364972  314124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:24:16.365677  314124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk4aad2ea1df51829b8bb57d56bd4d8e58dc96e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:24:16.368948  314124 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20220602171905-283122" rescaled to 1
	I0602 17:24:16.369008  314124 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 17:24:16.369031  314124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 17:24:16.371620  314124 out.go:177] * Verifying Kubernetes components...
	I0602 17:24:16.369125  314124 addons.go:415] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0602 17:24:16.369315  314124 config.go:178] Loaded profile config "functional-20220602171905-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:24:16.373895  314124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 17:24:16.373903  314124 addons.go:65] Setting storage-provisioner=true in profile "functional-20220602171905-283122"
	I0602 17:24:16.373930  314124 addons.go:153] Setting addon storage-provisioner=true in "functional-20220602171905-283122"
	W0602 17:24:16.373935  314124 addons.go:165] addon storage-provisioner should already be in state true
	I0602 17:24:16.373990  314124 host.go:66] Checking if "functional-20220602171905-283122" exists ...
	I0602 17:24:16.373987  314124 addons.go:65] Setting default-storageclass=true in profile "functional-20220602171905-283122"
	I0602 17:24:16.374086  314124 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20220602171905-283122"
	I0602 17:24:16.374402  314124 cli_runner.go:164] Run: docker container inspect functional-20220602171905-283122 --format={{.State.Status}}
	I0602 17:24:16.374541  314124 cli_runner.go:164] Run: docker container inspect functional-20220602171905-283122 --format={{.State.Status}}
	I0602 17:24:16.418508  314124 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 17:24:16.420342  314124 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 17:24:16.420356  314124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 17:24:16.420407  314124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220602171905-283122
	I0602 17:24:16.438015  314124 addons.go:153] Setting addon default-storageclass=true in "functional-20220602171905-283122"
	W0602 17:24:16.438035  314124 addons.go:165] addon default-storageclass should already be in state true
	I0602 17:24:16.438069  314124 host.go:66] Checking if "functional-20220602171905-283122" exists ...
	I0602 17:24:16.438641  314124 cli_runner.go:164] Run: docker container inspect functional-20220602171905-283122 --format={{.State.Status}}
	I0602 17:24:16.446614  314124 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0602 17:24:16.446617  314124 node_ready.go:35] waiting up to 6m0s for node "functional-20220602171905-283122" to be "Ready" ...
	I0602 17:24:16.457656  314124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49457 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/functional-20220602171905-283122/id_rsa Username:docker}
	I0602 17:24:16.475082  314124 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 17:24:16.475097  314124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 17:24:16.475161  314124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220602171905-283122
	I0602 17:24:16.511480  314124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49457 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/functional-20220602171905-283122/id_rsa Username:docker}
	I0602 17:24:16.555374  314124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 17:24:16.558613  314124 node_ready.go:49] node "functional-20220602171905-283122" has status "Ready":"True"
	I0602 17:24:16.558624  314124 node_ready.go:38] duration metric: took 111.984583ms waiting for node "functional-20220602171905-283122" to be "Ready" ...
	I0602 17:24:16.558635  314124 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 17:24:16.603709  314124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 17:24:16.759706  314124 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-fqvms" in "kube-system" namespace to be "Ready" ...
	W0602 17:24:17.553877  314124 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner": dial tcp 127.0.0.1:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-provisioner": dial tcp 127.0.0.1:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:persistent-volume-provisioner": dial tcp 127.0.0.1:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:persistent-volume-provisioner": dial tcp 127.0.0.1:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 127.0.0.1:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/api/v1/namespaces/kube-system/pods/storage-provisioner": dial tcp 127.0.0.1:8441: connect: connection refused
	I0602 17:24:17.553905  314124 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error when retrieving current configuration of:
	Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner": dial tcp 127.0.0.1:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=clusterrolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
	Name: "storage-provisioner", Namespace: ""
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/storage-provisioner": dial tcp 127.0.0.1:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=Role"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:persistent-volume-provisioner": dial tcp 127.0.0.1:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "rbac.authorization.k8s.io/v1, Resource=rolebindings", GroupVersionKind: "rbac.authorization.k8s.io/v1, Kind=RoleBinding"
	Name: "system:persistent-volume-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:persistent-volume-provisioner": dial tcp 127.0.0.1:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "/v1, Resource=endpoints", GroupVersionKind: "/v1, Kind=Endpoints"
	Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 127.0.0.1:8441: connect: connection refused
	error when retrieving current configuration of:
	Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
	Name: "storage-provisioner", Namespace: "kube-system"
	from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get "https://localhost:8441/api/v1/namespaces/kube-system/pods/storage-provisioner": dial tcp 127.0.0.1:8441: connect: connection refused
	W0602 17:24:17.553949  314124 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": Get "https://localhost:8441/apis/storage.k8s.io/v1/storageclasses/standard": dial tcp 127.0.0.1:8441: connect: connection refused
	I0602 17:24:17.553965  314124 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error when retrieving current configuration of:
	Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
	Name: "standard", Namespace: ""
	from server for: "/etc/kubernetes/addons/storageclass.yaml": Get "https://localhost:8441/apis/storage.k8s.io/v1/storageclasses/standard": dial tcp 127.0.0.1:8441: connect: connection refused
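Both addon applies fail here because the restarted apiserver is not listening yet, so each is retried after a short delay (the "will retry after ..." lines). A generic fixed-backoff Go sketch of that loop; minikube's actual retry package chooses jittered delays, and retryAfter is an illustrative name:

package main

import (
	"fmt"
	"time"
)

// retryAfter re-runs fn until it succeeds or attempts run out, logging the
// delay in the style of the "will retry after Xms" lines above.
func retryAfter(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryAfter(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	})
	fmt.Println("done:", err)
}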
	I0602 17:24:17.658299  314124 pod_ready.go:97] error getting pod "coredns-64897985d-fqvms" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-64897985d-fqvms": dial tcp 192.168.49.2:8441: connect: connection refused
	I0602 17:24:17.658328  314124 pod_ready.go:81] duration metric: took 898.603761ms waiting for pod "coredns-64897985d-fqvms" in "kube-system" namespace to be "Ready" ...
	E0602 17:24:17.658341  314124 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-fqvms" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-64897985d-fqvms": dial tcp 192.168.49.2:8441: connect: connection refused
	I0602 17:24:17.658366  314124 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:24:17.658674  314124 pod_ready.go:97] error getting pod "etcd-functional-20220602171905-283122" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20220602171905-283122": dial tcp 192.168.49.2:8441: connect: connection refused
	I0602 17:24:17.658686  314124 pod_ready.go:81] duration metric: took 311.87µs waiting for pod "etcd-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	E0602 17:24:17.658697  314124 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-20220602171905-283122" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20220602171905-283122": dial tcp 192.168.49.2:8441: connect: connection refused
	I0602 17:24:17.658717  314124 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:24:17.754432  314124 pod_ready.go:97] error getting pod "kube-apiserver-functional-20220602171905-283122" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20220602171905-283122": dial tcp 192.168.49.2:8441: connect: connection refused
	I0602 17:24:17.754451  314124 pod_ready.go:81] duration metric: took 95.726754ms waiting for pod "kube-apiserver-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	E0602 17:24:17.754461  314124 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-20220602171905-283122" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20220602171905-283122": dial tcp 192.168.49.2:8441: connect: connection refused
	I0602 17:24:17.754483  314124 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:24:17.830643  314124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0602 17:24:17.885728  314124 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0602 17:24:17.885751  314124 retry.go:31] will retry after 436.71002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0602 17:24:17.914901  314124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 17:24:17.954909  314124 pod_ready.go:97] error getting pod "kube-controller-manager-functional-20220602171905-283122" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20220602171905-283122": dial tcp 192.168.49.2:8441: connect: connection refused
	I0602 17:24:17.954929  314124 pod_ready.go:81] duration metric: took 200.438082ms waiting for pod "kube-controller-manager-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	E0602 17:24:17.954944  314124 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-20220602171905-283122" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20220602171905-283122": dial tcp 192.168.49.2:8441: connect: connection refused
	I0602 17:24:17.954969  314124 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x5hdb" in "kube-system" namespace to be "Ready" ...
	W0602 17:24:17.961268  314124 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0602 17:24:17.961295  314124 retry.go:31] will retry after 351.64282ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0602 17:24:18.154975  314124 pod_ready.go:97] error getting pod "kube-proxy-x5hdb" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-x5hdb": dial tcp 192.168.49.2:8441: connect: connection refused
	I0602 17:24:18.154993  314124 pod_ready.go:81] duration metric: took 200.015745ms waiting for pod "kube-proxy-x5hdb" in "kube-system" namespace to be "Ready" ...
	E0602 17:24:18.155005  314124 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-x5hdb" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-x5hdb": dial tcp 192.168.49.2:8441: connect: connection refused
	I0602 17:24:18.155038  314124 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:24:18.313170  314124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 17:24:18.322606  314124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 17:24:18.354308  314124 pod_ready.go:97] error getting pod "kube-scheduler-functional-20220602171905-283122" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20220602171905-283122": dial tcp 192.168.49.2:8441: connect: connection refused
	I0602 17:24:18.354328  314124 pod_ready.go:81] duration metric: took 199.280754ms waiting for pod "kube-scheduler-functional-20220602171905-283122" in "kube-system" namespace to be "Ready" ...
	E0602 17:24:18.354339  314124 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-20220602171905-283122" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20220602171905-283122": dial tcp 192.168.49.2:8441: connect: connection refused
	I0602 17:24:18.354361  314124 pod_ready.go:38] duration metric: took 1.795714024s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 17:24:18.354402  314124 api_server.go:51] waiting for apiserver process to appear ...
	I0602 17:24:18.354459  314124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0602 17:24:18.366024  314124 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0602 17:24:18.366049  314124 retry.go:31] will retry after 520.108592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0602 17:24:18.373625  314124 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0602 17:24:18.373647  314124 retry.go:31] will retry after 667.587979ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
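The repeated `connection refused` failures above are expected: the apiserver on port 8441 is still restarting, and minikube's retry.go simply backs off a few hundred milliseconds and re-applies each addon manifest. A minimal way to reproduce the same probe by hand, assuming the profile name and in-node binary path shown in the log:

	$ minikube ssh -p functional-20220602171905-283122
	$ sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.23.6/kubectl get --raw /readyz

Until the apiserver binds the port, this fails with the same "connection to the server localhost:8441 was refused" message seen in the retries.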
	I0602 17:24:18.874524  314124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 17:24:18.886553  314124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0602 17:24:18.931543  314124 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0602 17:24:18.931564  314124 retry.go:31] will retry after 477.256235ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0602 17:24:19.041786  314124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0602 17:24:19.090252  314124 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0602 17:24:19.090276  314124 retry.go:31] will retry after 553.938121ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0602 17:24:19.374656  314124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 17:24:19.384688  314124 api_server.go:71] duration metric: took 3.01563446s to wait for apiserver process to appear ...
	I0602 17:24:19.384710  314124 api_server.go:87] waiting for apiserver healthz status ...
	I0602 17:24:19.384722  314124 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0602 17:24:19.409383  314124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 17:24:19.645143  314124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 17:24:21.249338  314124 api_server.go:266] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0602 17:24:21.249360  314124 api_server.go:102] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0602 17:24:21.750021  314124 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0602 17:24:21.756364  314124 api_server.go:266] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 17:24:21.756390  314124 api_server.go:102] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
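Each named check in this breakdown is also exposed as its own endpoint, so a failing post-start hook can be polled directly; `reason withheld` only means the details are hidden from this caller. A hypothetical direct probe of the failing check:

	$ kubectl get --raw /healthz/poststarthook/rbac/bootstrap-roles

Once the hook completes, that endpoint returns `ok` and the aggregate /healthz flips to 200, which is exactly what happens at 17:24:23 below.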
	I0602 17:24:22.249617  314124 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0602 17:24:22.255243  314124 api_server.go:266] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 17:24:22.255275  314124 api_server.go:102] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 17:24:22.749151  314124 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.339728937s)
	I0602 17:24:22.750463  314124 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0602 17:24:22.755235  314124 api_server.go:266] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0602 17:24:22.755252  314124 api_server.go:102] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0602 17:24:22.766741  314124 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.121564749s)
	I0602 17:24:22.770130  314124 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0602 17:24:22.771560  314124 addons.go:417] enableAddons completed in 6.402429252s
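With both manifests finally applied, the two default addons can be verified from the host; a quick check, assuming the usual minikube convention that the kubectl context matches the profile name:

	$ kubectl --context functional-20220602171905-283122 get storageclass
	$ kubectl --context functional-20220602171905-283122 -n kube-system get pod storage-provisioner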
	I0602 17:24:23.250208  314124 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0602 17:24:23.255146  314124 api_server.go:266] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0602 17:24:23.261803  314124 api_server.go:140] control plane version: v1.23.6
	I0602 17:24:23.261823  314124 api_server.go:130] duration metric: took 3.87710698s to wait for apiserver health ...
	I0602 17:24:23.261831  314124 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 17:24:23.267129  314124 system_pods.go:59] 7 kube-system pods found
	I0602 17:24:23.267148  314124 system_pods.go:61] "coredns-64897985d-fqvms" [6621aa03-5b38-445f-b662-d6838aedd8c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 17:24:23.267155  314124 system_pods.go:61] "etcd-functional-20220602171905-283122" [16e305ca-68dc-4b9a-ad48-e11022ccb3e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 17:24:23.267159  314124 system_pods.go:61] "kube-apiserver-functional-20220602171905-283122" [adc990a7-6ba6-452c-a1b1-8f2f13806281] Pending
	I0602 17:24:23.267165  314124 system_pods.go:61] "kube-controller-manager-functional-20220602171905-283122" [cc1fee66-dc1d-49bd-af72-33b735fa7634] Running
	I0602 17:24:23.267168  314124 system_pods.go:61] "kube-proxy-x5hdb" [39bcad9e-d675-48d1-924d-ce7c5ab64b2c] Running
	I0602 17:24:23.267172  314124 system_pods.go:61] "kube-scheduler-functional-20220602171905-283122" [06201958-917d-4b61-896b-8eed8874b229] Running
	I0602 17:24:23.267176  314124 system_pods.go:61] "storage-provisioner" [ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 17:24:23.267180  314124 system_pods.go:74] duration metric: took 5.344668ms to wait for pod list to return data ...
	I0602 17:24:23.267188  314124 default_sa.go:34] waiting for default service account to be created ...
	I0602 17:24:23.269820  314124 default_sa.go:45] found service account: "default"
	I0602 17:24:23.269835  314124 default_sa.go:55] duration metric: took 2.641993ms for default service account to be created ...
	I0602 17:24:23.269844  314124 system_pods.go:116] waiting for k8s-apps to be running ...
	I0602 17:24:23.275579  314124 system_pods.go:86] 7 kube-system pods found
	I0602 17:24:23.275601  314124 system_pods.go:89] "coredns-64897985d-fqvms" [6621aa03-5b38-445f-b662-d6838aedd8c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 17:24:23.275614  314124 system_pods.go:89] "etcd-functional-20220602171905-283122" [16e305ca-68dc-4b9a-ad48-e11022ccb3e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 17:24:23.275622  314124 system_pods.go:89] "kube-apiserver-functional-20220602171905-283122" [adc990a7-6ba6-452c-a1b1-8f2f13806281] Pending
	I0602 17:24:23.275636  314124 system_pods.go:89] "kube-controller-manager-functional-20220602171905-283122" [cc1fee66-dc1d-49bd-af72-33b735fa7634] Running
	I0602 17:24:23.275642  314124 system_pods.go:89] "kube-proxy-x5hdb" [39bcad9e-d675-48d1-924d-ce7c5ab64b2c] Running
	I0602 17:24:23.275648  314124 system_pods.go:89] "kube-scheduler-functional-20220602171905-283122" [06201958-917d-4b61-896b-8eed8874b229] Running
	I0602 17:24:23.275656  314124 system_pods.go:89] "storage-provisioner" [ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 17:24:23.275676  314124 retry.go:31] will retry after 199.621189ms: missing components: kube-apiserver
	I0602 17:24:23.481733  314124 system_pods.go:86] 7 kube-system pods found
	I0602 17:24:23.481752  314124 system_pods.go:89] "coredns-64897985d-fqvms" [6621aa03-5b38-445f-b662-d6838aedd8c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 17:24:23.481766  314124 system_pods.go:89] "etcd-functional-20220602171905-283122" [16e305ca-68dc-4b9a-ad48-e11022ccb3e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 17:24:23.481774  314124 system_pods.go:89] "kube-apiserver-functional-20220602171905-283122" [adc990a7-6ba6-452c-a1b1-8f2f13806281] Pending
	I0602 17:24:23.481780  314124 system_pods.go:89] "kube-controller-manager-functional-20220602171905-283122" [cc1fee66-dc1d-49bd-af72-33b735fa7634] Running
	I0602 17:24:23.481784  314124 system_pods.go:89] "kube-proxy-x5hdb" [39bcad9e-d675-48d1-924d-ce7c5ab64b2c] Running
	I0602 17:24:23.481787  314124 system_pods.go:89] "kube-scheduler-functional-20220602171905-283122" [06201958-917d-4b61-896b-8eed8874b229] Running
	I0602 17:24:23.481792  314124 system_pods.go:89] "storage-provisioner" [ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 17:24:23.481807  314124 retry.go:31] will retry after 281.392478ms: missing components: kube-apiserver
	I0602 17:24:23.770419  314124 system_pods.go:86] 7 kube-system pods found
	I0602 17:24:23.770442  314124 system_pods.go:89] "coredns-64897985d-fqvms" [6621aa03-5b38-445f-b662-d6838aedd8c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 17:24:23.770451  314124 system_pods.go:89] "etcd-functional-20220602171905-283122" [16e305ca-68dc-4b9a-ad48-e11022ccb3e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 17:24:23.770456  314124 system_pods.go:89] "kube-apiserver-functional-20220602171905-283122" [adc990a7-6ba6-452c-a1b1-8f2f13806281] Pending
	I0602 17:24:23.770463  314124 system_pods.go:89] "kube-controller-manager-functional-20220602171905-283122" [cc1fee66-dc1d-49bd-af72-33b735fa7634] Running
	I0602 17:24:23.770469  314124 system_pods.go:89] "kube-proxy-x5hdb" [39bcad9e-d675-48d1-924d-ce7c5ab64b2c] Running
	I0602 17:24:23.770475  314124 system_pods.go:89] "kube-scheduler-functional-20220602171905-283122" [06201958-917d-4b61-896b-8eed8874b229] Running
	I0602 17:24:23.770483  314124 system_pods.go:89] "storage-provisioner" [ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 17:24:23.770498  314124 retry.go:31] will retry after 393.596217ms: missing components: kube-apiserver
	I0602 17:24:24.170406  314124 system_pods.go:86] 7 kube-system pods found
	I0602 17:24:24.170423  314124 system_pods.go:89] "coredns-64897985d-fqvms" [6621aa03-5b38-445f-b662-d6838aedd8c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 17:24:24.170430  314124 system_pods.go:89] "etcd-functional-20220602171905-283122" [16e305ca-68dc-4b9a-ad48-e11022ccb3e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 17:24:24.170434  314124 system_pods.go:89] "kube-apiserver-functional-20220602171905-283122" [adc990a7-6ba6-452c-a1b1-8f2f13806281] Pending
	I0602 17:24:24.170438  314124 system_pods.go:89] "kube-controller-manager-functional-20220602171905-283122" [cc1fee66-dc1d-49bd-af72-33b735fa7634] Running
	I0602 17:24:24.170442  314124 system_pods.go:89] "kube-proxy-x5hdb" [39bcad9e-d675-48d1-924d-ce7c5ab64b2c] Running
	I0602 17:24:24.170448  314124 system_pods.go:89] "kube-scheduler-functional-20220602171905-283122" [06201958-917d-4b61-896b-8eed8874b229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0602 17:24:24.170454  314124 system_pods.go:89] "storage-provisioner" [ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 17:24:24.170466  314124 retry.go:31] will retry after 564.853506ms: missing components: kube-apiserver
	I0602 17:24:24.742380  314124 system_pods.go:86] 7 kube-system pods found
	I0602 17:24:24.742405  314124 system_pods.go:89] "coredns-64897985d-fqvms" [6621aa03-5b38-445f-b662-d6838aedd8c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 17:24:24.742415  314124 system_pods.go:89] "etcd-functional-20220602171905-283122" [16e305ca-68dc-4b9a-ad48-e11022ccb3e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 17:24:24.742420  314124 system_pods.go:89] "kube-apiserver-functional-20220602171905-283122" [adc990a7-6ba6-452c-a1b1-8f2f13806281] Pending
	I0602 17:24:24.742429  314124 system_pods.go:89] "kube-controller-manager-functional-20220602171905-283122" [cc1fee66-dc1d-49bd-af72-33b735fa7634] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0602 17:24:24.742436  314124 system_pods.go:89] "kube-proxy-x5hdb" [39bcad9e-d675-48d1-924d-ce7c5ab64b2c] Running
	I0602 17:24:24.742444  314124 system_pods.go:89] "kube-scheduler-functional-20220602171905-283122" [06201958-917d-4b61-896b-8eed8874b229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0602 17:24:24.742452  314124 system_pods.go:89] "storage-provisioner" [ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 17:24:24.742469  314124 retry.go:31] will retry after 523.151816ms: missing components: kube-apiserver
	I0602 17:24:25.271674  314124 system_pods.go:86] 7 kube-system pods found
	I0602 17:24:25.271693  314124 system_pods.go:89] "coredns-64897985d-fqvms" [6621aa03-5b38-445f-b662-d6838aedd8c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0602 17:24:25.271701  314124 system_pods.go:89] "etcd-functional-20220602171905-283122" [16e305ca-68dc-4b9a-ad48-e11022ccb3e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0602 17:24:25.271706  314124 system_pods.go:89] "kube-apiserver-functional-20220602171905-283122" [adc990a7-6ba6-452c-a1b1-8f2f13806281] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0602 17:24:25.271713  314124 system_pods.go:89] "kube-controller-manager-functional-20220602171905-283122" [cc1fee66-dc1d-49bd-af72-33b735fa7634] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0602 17:24:25.271716  314124 system_pods.go:89] "kube-proxy-x5hdb" [39bcad9e-d675-48d1-924d-ce7c5ab64b2c] Running
	I0602 17:24:25.271721  314124 system_pods.go:89] "kube-scheduler-functional-20220602171905-283122" [06201958-917d-4b61-896b-8eed8874b229] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0602 17:24:25.271726  314124 system_pods.go:89] "storage-provisioner" [ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0602 17:24:25.271732  314124 system_pods.go:126] duration metric: took 2.001883424s to wait for k8s-apps to be running ...
	I0602 17:24:25.271739  314124 system_svc.go:44] waiting for kubelet service to be running ....
	I0602 17:24:25.271782  314124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 17:24:25.281611  314124 system_svc.go:56] duration metric: took 9.861891ms WaitForService to wait for kubelet.
	I0602 17:24:25.281628  314124 kubeadm.go:572] duration metric: took 8.912582827s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0602 17:24:25.281648  314124 node_conditions.go:102] verifying NodePressure condition ...
	I0602 17:24:25.284667  314124 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0602 17:24:25.284681  314124 node_conditions.go:123] node cpu capacity is 8
	I0602 17:24:25.284694  314124 node_conditions.go:105] duration metric: took 3.041374ms to run NodePressure ...
	I0602 17:24:25.284707  314124 start.go:213] waiting for startup goroutines ...
	I0602 17:24:25.325937  314124 start.go:504] kubectl: 1.24.1, cluster: 1.23.6 (minor skew: 1)
	I0602 17:24:25.329409  314124 out.go:177] * Done! kubectl is now configured to use "functional-20220602171905-283122" cluster and "default" namespace by default
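The `minor skew: 1` note above is informational: kubectl is supported against clusters within one minor version in either direction, so a 1.24 client with a 1.23 control plane is fine. The pairing can be confirmed with:

	$ kubectl version --short

(`--short` is the flag spelling for this kubectl generation; it prints just the client and server versions.)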
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 17:19:13 UTC, end at Thu 2022-06-02 17:24:26 UTC. --
	Jun 02 17:19:44 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:19:44.438329305Z" level=info msg="ignoring event" container=5d32f6a9636e00574a59bf80a3022c9f32c9a919b93e730390fe6b3f058aef23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:19:45 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:19:45.217756150Z" level=info msg="ignoring event" container=030a27e357591fe913513f74ce721806f110d766b284942aaaa9bcc3ab3b273f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:19:53 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:19:53.558942911Z" level=info msg="ignoring event" container=cc1c52bc8ed9b7a1248cf49533d921e7897d9d007099af3179b30ce3af123281 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:19:53 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:19:53.616555324Z" level=info msg="ignoring event" container=459966c4fd322a5d1a2e33573d00e13ad18af3b60cedd804ff0a78d39ae016c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:19:59 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:19:59.495672168Z" level=info msg="ignoring event" container=1bd5bd06a412955fb9f8322f385375351740a9384f53bca989c25baeeddb7d48 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:20:26 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:20:26.484299801Z" level=info msg="ignoring event" container=a302d202d77edacd97ede5b5b82d48981303425268a80c4494742f5906484b84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:21:19 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:21:19.493219765Z" level=info msg="ignoring event" container=549fa7dbb724b7c0389bc435dbc6e65ad45cd84f0286bfb697cfa776458f42d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:22:47 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:22:47.501050851Z" level=info msg="ignoring event" container=81600bc5024615b2f9e08dbaa723148a55c894ddae82385268949678d5c67d63 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:08 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:08.844923570Z" level=info msg="ignoring event" container=3f293bdfbc88ef714a820d19967dbe67848494da2946aef2246372d4df5ac038 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:08 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:08.844978573Z" level=info msg="ignoring event" container=4e587023f9273084fcc4fc1ccebdac1ae2780a506095fdebf99b98787823c9c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:08 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:08.845046359Z" level=info msg="ignoring event" container=4fa66a37576fbff26fd89cc5d29ef6fb2790277213da6c9b2d1b68373d1e9f4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:08 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:08.845064770Z" level=info msg="ignoring event" container=895c379f6918084806c79f732f4051f783c3d9d1ee08f4f11ee2440ce7a3eb95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:08 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:08.854491621Z" level=info msg="ignoring event" container=730f8be15784d31758a1ffa15f43ecc24838c21813eb11671016bc362f59a0ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:08 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:08.935934072Z" level=info msg="ignoring event" container=0544243af64e46b40d9ae2a86b3f847b54ee76a33b0f3d29ac1721eac86e94e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:08 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:08.936536061Z" level=info msg="ignoring event" container=ae98ee0fc94feb6897c917f6b210ca262461a259a0f8ef93da87b9eeeae38556 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:09 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:09.034031933Z" level=info msg="ignoring event" container=4b794e67eb6e3a8f715632f21fed13811501658f42db268e7aff5e09a7d0dd3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:09 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:09.035835009Z" level=info msg="ignoring event" container=b2cbc18e9595c4538045222bd0302eaf7edfa2dca730e18fb44a5bfee30ac53a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:09 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:09.068429157Z" level=info msg="ignoring event" container=5353917f3d71e2c8eb1026adff6a765f04b08878565484fd6471af65d307e8dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:09 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:09.453926823Z" level=info msg="ignoring event" container=22f9f15b9a3ff477d075ef3e12b621e470e48ec1d2156cf9e4dc638719099d31 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:09 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:09.456833730Z" level=info msg="ignoring event" container=92951e242fba8b7554198442693b938b282b22612387e2389936b56444d193b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:13 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:13.848379818Z" level=info msg="ignoring event" container=b8aba85da98e3534f56f991138893080c425d892a83538e50da55f94975e1f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:16 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:16.867632840Z" level=info msg="ignoring event" container=9dda91ef327edf5a32279dd9fcf87a912c929e5d5110ced1b392ca4cf7782558 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:17 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:17.435758519Z" level=info msg="ignoring event" container=cd87610507c56062c12ab62a85742939cc69769b9f65a31c73d1129bba837c3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:17 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:17.545371070Z" level=info msg="ignoring event" container=cc1cadc790059b509c3f2c1c7b79375145832a2550c869f11789ae75cb4449cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:21 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:21.954403562Z" level=info msg="ignoring event" container=109f6c67be4d3e212df55bed330bdcb97c1fc48bb17cf72d49dea2b80e9ce6b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
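The run of `ignoring event ... TaskDelete` messages is dockerd acknowledging containerd task deletions as the kubelet tears down and recreates the control-plane containers around 17:24:08-17:24:21; nothing is being dropped. A sketch of how to watch the same stream live, using standard docker CLI filters:

	$ docker events --filter type=container --filter event=die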
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	c276864164b49       a4ca41631cc7a       4 seconds ago       Running             coredns                   1                   fe66982393c40
	109f6c67be4d3       6e38f40d628db       5 seconds ago       Exited              storage-provisioner       6                   c5a904a60d2e4
	1c227889c513e       8fa62c12256df       8 seconds ago       Running             kube-apiserver            1                   32cbbdc559078
	9dda91ef327ed       8fa62c12256df       10 seconds ago      Exited              kube-apiserver            0                   32cbbdc559078
	5cabfd9de847a       595f327f224a4       17 seconds ago      Running             kube-scheduler            1                   19dfc0de11563
	bc1df717c35fc       4c03754524064       17 seconds ago      Running             kube-proxy                1                   341b9b7854419
	020a23185b37e       df7b72818ad2e       17 seconds ago      Running             kube-controller-manager   1                   4d6e85f997653
	6280eaa54ed8b       25f8c7f3da61c       17 seconds ago      Running             etcd                      1                   b937529554388
	b8aba85da98e3       a4ca41631cc7a       4 minutes ago       Exited              coredns                   0                   0544243af64e4
	4b794e67eb6e3       4c03754524064       4 minutes ago       Exited              kube-proxy                0                   895c379f69180
	b2cbc18e9595c       25f8c7f3da61c       5 minutes ago       Exited              etcd                      0                   3f293bdfbc88e
	22f9f15b9a3ff       595f327f224a4       5 minutes ago       Exited              kube-scheduler            0                   4fa66a37576fb
	5353917f3d71e       df7b72818ad2e       5 minutes ago       Exited              kube-controller-manager   0                   ae98ee0fc94fe
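The table reads as a restart story: every `Exited ... 0` row is the pre-restart control plane and the matching `Running ... 1` row is its replacement, while storage-provisioner is on attempt 6 because it crash-looped each time the apiserver was unreachable. Under dockershim the pod and container names are baked into the docker container name, so a hypothetical filter like the following lists all generations of one component:

	$ docker ps -a --filter "name=k8s_kube-apiserver"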
	
	* 
	* ==> coredns [b8aba85da98e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
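This is the old coredns instance: its ready plugin kept waiting on the kubernetes plugin (i.e. on apiserver connectivity), and the SIGTERM/lameduck lines are its graceful shutdown when the pod was recycled. The replacement's readiness can be checked the usual way, using the standard kube-dns label:

	$ kubectl -n kube-system get pods -l k8s-app=kube-dns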
	
	* 
	* ==> coredns [c276864164b4] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220602171905-283122
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220602171905-283122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=functional-20220602171905-283122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T17_19_29_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 17:19:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220602171905-283122
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 17:24:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 17:24:15 +0000   Thu, 02 Jun 2022 17:19:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 17:24:15 +0000   Thu, 02 Jun 2022 17:19:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 17:24:15 +0000   Thu, 02 Jun 2022 17:19:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 17:24:15 +0000   Thu, 02 Jun 2022 17:24:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220602171905-283122
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                e5257698-d461-4a2f-b7e2-77ca6f3add35
	  Boot ID:                    eac629ea-39e3-4b75-b891-94bd750a4fe6
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-fqvms                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m44s
	  kube-system                 etcd-functional-20220602171905-283122                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m56s
	  kube-system                 kube-apiserver-functional-20220602171905-283122              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-controller-manager-functional-20220602171905-283122     200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-proxy-x5hdb                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-scheduler-functional-20220602171905-283122              100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 10s                  kube-proxy  
	  Normal  Starting                 4m43s                kube-proxy  
	  Normal  NodeHasSufficientMemory  5m5s (x4 over 5m5s)  kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s (x4 over 5m5s)  kubelet     Node functional-20220602171905-283122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s (x3 over 5m5s)  kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m5s                 kubelet     Starting kubelet.
	  Normal  Starting                 4m57s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientPID     4m57s                kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4m57s                kubelet     Node functional-20220602171905-283122 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  4m57s                kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s                kubelet     Node functional-20220602171905-283122 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  4m57s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m47s                kubelet     Node functional-20220602171905-283122 status is now: NodeReady
	  Normal  Starting                 11s                  kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    11s                  kubelet     Node functional-20220602171905-283122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s                  kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             11s                  kubelet     Node functional-20220602171905-283122 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  11s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                11s                  kubelet     Node functional-20220602171905-283122 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  11s                  kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientMemory
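The three `Starting kubelet.` events correspond to the three kubelet generations visible elsewhere in this report (initial bring-up at 5m5s, the reconfiguration at 4m57s, and the functional-test restart 11s ago); the node flapping NodeNotReady/NodeReady around each one is expected. The same event history comes straight from:

	$ kubectl describe node functional-20220602171905-283122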
	
	* 
	* ==> dmesg <==
	* [  +0.000032] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +2.947754] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.019849] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.023843] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000036] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[Jun 2 17:03] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.025582] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.023870] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +2.947784] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.019868] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000041] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.023839] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000032] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +2.959752] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.007840] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000034] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
	[  +1.023865] IPv4: martian source 10.244.0.133 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 66 26 fd 92 89 86 08 06
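The `martian source` spam is the kernel's reverse-path filter logging packets whose source address is implausible on eth0, a common artifact of overlapping pod CIDRs on shared CI hosts and harmless here. Whether this logging is enabled can be checked with the standard sysctls:

	$ sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians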
	
	* 
	* ==> etcd [6280eaa54ed8] <==
	* {"level":"info","ts":"2022-06-02T17:24:10.345Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-06-02T17:24:10.346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-06-02T17:24:10.346Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-06-02T17:24:10.346Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:24:10.346Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:24:10.348Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-02T17:24:10.348Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-02T17:24:10.348Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2022-06-02T17:24:10.352Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:24:10.352Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:24:10.352Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220602171905-283122 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:24:11.236Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T17:24:11.236Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T17:24:11.237Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T17:24:11.237Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> etcd [b2cbc18e9595] <==
	* {"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220602171905-283122 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:19:22.950Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-02T17:19:22.950Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T17:24:08.848Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-02T17:24:08.848Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220602171905-283122","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/06/02 17:24:08 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/02 17:24:08 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-02T17:24:08.938Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-06-02T17:24:08.940Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:24:08.941Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:24:08.941Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220602171905-283122","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  17:24:26 up  2:06,  0 users,  load average: 0.31, 0.42, 0.69
	Linux functional-20220602171905-283122 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1c227889c513] <==
	* I0602 17:24:21.208292       1 establishing_controller.go:76] Starting EstablishingController
	I0602 17:24:21.208311       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0602 17:24:21.208334       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0602 17:24:21.208349       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0602 17:24:21.208387       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0602 17:24:21.208401       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0602 17:24:21.213790       1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0602 17:24:21.215854       1 available_controller.go:491] Starting AvailableConditionController
	I0602 17:24:21.215880       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	E0602 17:24:21.249357       1 controller.go:157] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0602 17:24:21.334011       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0602 17:24:21.338132       1 cache.go:39] Caches are synced for autoregister controller
	I0602 17:24:21.334015       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0602 17:24:21.338298       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0602 17:24:21.204985       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0602 17:24:21.338588       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0602 17:24:21.338599       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0602 17:24:21.339046       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0602 17:24:21.340513       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0602 17:24:22.234058       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0602 17:24:22.234100       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0602 17:24:22.239133       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0602 17:24:25.393716       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0602 17:24:26.249757       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 17:24:26.311033       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [9dda91ef327e] <==
	* I0602 17:24:16.847387       1 server.go:565] external host was not specified, using 192.168.49.2
	I0602 17:24:16.847934       1 server.go:172] Version: v1.23.6
	E0602 17:24:16.848289       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	* 
	* ==> kube-controller-manager [020a23185b37] <==
	* I0602 17:24:26.269092       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0602 17:24:26.300915       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0602 17:24:26.301978       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0602 17:24:26.333912       1 shared_informer.go:247] Caches are synced for GC 
	I0602 17:24:26.333912       1 shared_informer.go:247] Caches are synced for node 
	I0602 17:24:26.333983       1 range_allocator.go:173] Starting range CIDR allocator
	I0602 17:24:26.333995       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0602 17:24:26.334011       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0602 17:24:26.337326       1 shared_informer.go:247] Caches are synced for TTL 
	I0602 17:24:26.348219       1 shared_informer.go:247] Caches are synced for attach detach 
	I0602 17:24:26.426505       1 shared_informer.go:247] Caches are synced for taint 
	I0602 17:24:26.426625       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0602 17:24:26.426716       1 node_lifecycle_controller.go:1012] Missing timestamp for Node functional-20220602171905-283122. Assuming now as a timestamp.
	I0602 17:24:26.426740       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0602 17:24:26.426757       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0602 17:24:26.426835       1 event.go:294] "Event occurred" object="functional-20220602171905-283122" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20220602171905-283122 event: Registered Node functional-20220602171905-283122 in Controller"
	I0602 17:24:26.448623       1 shared_informer.go:247] Caches are synced for deployment 
	I0602 17:24:26.450804       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0602 17:24:26.450824       1 shared_informer.go:247] Caches are synced for disruption 
	I0602 17:24:26.450835       1 disruption.go:371] Sending events to api server.
	I0602 17:24:26.452024       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:24:26.489543       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:24:26.875604       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:24:26.913265       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:24:26.913303       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [5353917f3d71] <==
	* I0602 17:19:41.044509       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0602 17:19:41.046329       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0602 17:19:41.054618       1 shared_informer.go:247] Caches are synced for node 
	I0602 17:19:41.054659       1 range_allocator.go:173] Starting range CIDR allocator
	I0602 17:19:41.054664       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0602 17:19:41.054673       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0602 17:19:41.059714       1 range_allocator.go:374] Set node functional-20220602171905-283122 PodCIDR to [10.244.0.0/24]
	I0602 17:19:41.095042       1 shared_informer.go:247] Caches are synced for endpoint 
	I0602 17:19:41.095495       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0602 17:19:41.139576       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0602 17:19:41.192984       1 shared_informer.go:247] Caches are synced for HPA 
	I0602 17:19:41.194248       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0602 17:19:41.248010       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:19:41.265878       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:19:41.293388       1 shared_informer.go:247] Caches are synced for disruption 
	I0602 17:19:41.293415       1 disruption.go:371] Sending events to api server.
	I0602 17:19:41.300273       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0602 17:19:41.347564       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0602 17:19:41.669206       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:19:41.702715       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-x5hdb"
	I0602 17:19:41.744331       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:19:41.744361       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0602 17:19:42.052920       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-fqvms"
	I0602 17:19:42.060506       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-bkxkg"
	I0602 17:19:42.153428       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-bkxkg"
	
	* 
	* ==> kube-proxy [4b794e67eb6e] <==
	* I0602 17:19:42.955955       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0602 17:19:42.956055       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0602 17:19:42.956101       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 17:19:43.046822       1 server_others.go:206] "Using iptables Proxier"
	I0602 17:19:43.046872       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 17:19:43.046883       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 17:19:43.046913       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 17:19:43.047532       1 server.go:656] "Version info" version="v1.23.6"
	I0602 17:19:43.048394       1 config.go:317] "Starting service config controller"
	I0602 17:19:43.048449       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 17:19:43.052555       1 config.go:226] "Starting endpoint slice config controller"
	I0602 17:19:43.052575       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 17:19:43.148657       1 shared_informer.go:247] Caches are synced for service config 
	I0602 17:19:43.153634       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-proxy [bc1df717c35f] <==
	* E0602 17:24:10.349444       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220602171905-283122": dial tcp 192.168.49.2:8441: connect: connection refused
	E0602 17:24:13.741547       1 node.go:152] Failed to retrieve node info: nodes "functional-20220602171905-283122" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
	I0602 17:24:16.039874       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0602 17:24:16.039910       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0602 17:24:16.039948       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 17:24:16.065171       1 server_others.go:206] "Using iptables Proxier"
	I0602 17:24:16.065200       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 17:24:16.065206       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 17:24:16.065218       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 17:24:16.066337       1 server.go:656] "Version info" version="v1.23.6"
	I0602 17:24:16.067202       1 config.go:226] "Starting endpoint slice config controller"
	I0602 17:24:16.067228       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 17:24:16.067249       1 config.go:317] "Starting service config controller"
	I0602 17:24:16.067256       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 17:24:16.167335       1 shared_informer.go:247] Caches are synced for service config 
	I0602 17:24:16.167819       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [22f9f15b9a3f] <==
	* E0602 17:19:26.035397       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0602 17:19:26.841490       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0602 17:19:26.841523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0602 17:19:26.896763       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0602 17:19:26.896815       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0602 17:19:26.938966       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0602 17:19:26.939009       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 17:19:26.944131       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0602 17:19:26.944164       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0602 17:19:26.968866       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0602 17:19:26.968913       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0602 17:19:27.035032       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 17:19:27.035075       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0602 17:19:27.035098       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0602 17:19:27.035079       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 17:19:27.124268       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0602 17:19:27.124307       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0602 17:19:27.235858       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0602 17:19:27.235899       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0602 17:19:27.273385       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 17:19:27.273429       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0602 17:19:29.756149       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0602 17:24:08.848174       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 17:24:08.849027       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0602 17:24:08.849168       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	
	* 
	* ==> kube-scheduler [5cabfd9de847] <==
	* W0602 17:24:13.643183       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0602 17:24:13.643220       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 17:24:13.643231       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0602 17:24:13.643240       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0602 17:24:13.741838       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0602 17:24:13.743930       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0602 17:24:13.744002       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 17:24:13.744024       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0602 17:24:13.744186       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0602 17:24:13.845159       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0602 17:24:21.257394       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
	E0602 17:24:21.257493       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E0602 17:24:21.257587       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E0602 17:24:21.257643       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
	E0602 17:24:21.258016       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E0602 17:24:21.258123       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E0602 17:24:21.258145       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0602 17:24:21.258349       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E0602 17:24:21.258485       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
	E0602 17:24:21.258571       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E0602 17:24:21.258629       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E0602 17:24:21.258667       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0602 17:24:21.261209       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0602 17:24:21.339173       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0602 17:24:21.346397       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 17:19:13 UTC, end at Thu 2022-06-02 17:24:27 UTC. --
	Jun 02 17:24:17 functional-20220602171905-283122 kubelet[7096]: E0602 17:24:17.779287    7096 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 92951e242fba8b7554198442693b938b282b22612387e2389936b56444d193b4" containerID="92951e242fba8b7554198442693b938b282b22612387e2389936b56444d193b4"
	Jun 02 17:24:17 functional-20220602171905-283122 kubelet[7096]: I0602 17:24:17.779345    7096 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:92951e242fba8b7554198442693b938b282b22612387e2389936b56444d193b4} err="failed to get container status \"92951e242fba8b7554198442693b938b282b22612387e2389936b56444d193b4\": rpc error: code = Unknown desc = Error: No such container: 92951e242fba8b7554198442693b938b282b22612387e2389936b56444d193b4"
	Jun 02 17:24:17 functional-20220602171905-283122 kubelet[7096]: E0602 17:24:17.949723    7096 projected.go:199] Error preparing data for projected volume kube-api-access-k97kh for pod kube-system/storage-provisioner: failed to fetch token: Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner/token": dial tcp 192.168.49.2:8441: connect: connection refused
	Jun 02 17:24:17 functional-20220602171905-283122 kubelet[7096]: E0602 17:24:17.949838    7096 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4-kube-api-access-k97kh podName:ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4 nodeName:}" failed. No retries permitted until 2022-06-02 17:24:18.44981165 +0000 UTC m=+3.574187707 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k97kh" (UniqueName: "kubernetes.io/projected/ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4-kube-api-access-k97kh") pod "storage-provisioner" (UID: "ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4") : failed to fetch token: Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner/token": dial tcp 192.168.49.2:8441: connect: connection refused
	Jun 02 17:24:18 functional-20220602171905-283122 kubelet[7096]: E0602 17:24:18.149785    7096 kubelet.go:1742] "Failed creating a mirror pod for" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods\": dial tcp 192.168.49.2:8441: connect: connection refused" pod="kube-system/etcd-functional-20220602171905-283122"
	Jun 02 17:24:18 functional-20220602171905-283122 kubelet[7096]: E0602 17:24:18.349681    7096 kubelet.go:1742] "Failed creating a mirror pod for" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods\": dial tcp 192.168.49.2:8441: connect: connection refused" pod="kube-system/kube-controller-manager-functional-20220602171905-283122"
	Jun 02 17:24:18 functional-20220602171905-283122 kubelet[7096]: E0602 17:24:18.549359    7096 kubelet.go:1742] "Failed creating a mirror pod for" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods\": dial tcp 192.168.49.2:8441: connect: connection refused" pod="kube-system/kube-scheduler-functional-20220602171905-283122"
	Jun 02 17:24:18 functional-20220602171905-283122 kubelet[7096]: I0602 17:24:18.749669    7096 status_manager.go:688] "Failed to update status for pod" pod="kube-system/kube-proxy-x5hdb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39bcad9e-d675-48d1-924d-ce7c5ab64b2c\\\"},\\\"status\\\":{\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"docker://bc1df717c35fce95cb8e25574741ae6471ac2c81b66b8e61c8d144021cee7acc\\\",\\\"image\\\":\\\"k8s.gcr.io/kube-proxy:v1.23.6\\\",\\\"imageID\\\":\\\"docker-pullable://k8s.gcr.io/kube-proxy@sha256:cc007fb495f362f18c74e6f5552060c6785ca2b802a5067251de55c7cc880741\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"docker://4b794e67eb6e3a8f715632f21fed13811501658f42db268e7aff5e09a7d0dd3e\\\",\\\"exitCode\\\":2,\\\"finishedAt\\\":\\\"2022-06-02T17:24:08Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2022-06-02T17:19:42Z\\\"}},\\\"name\\\":\\\"kube-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2022-06-02T17:24:10Z\\\"}}}]}}\" for pod \"kube-system\"/\"kube-proxy-x5hdb\": Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-x5hdb/status\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jun 02 17:24:18 functional-20220602171905-283122 kubelet[7096]: E0602 17:24:18.948998    7096 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20220602171905-283122\": dial tcp 192.168.49.2:8441: connect: connection refused" pod="kube-system/kube-apiserver-functional-20220602171905-283122"
	Jun 02 17:24:18 functional-20220602171905-283122 kubelet[7096]: I0602 17:24:18.949152    7096 scope.go:110] "RemoveContainer" containerID="9dda91ef327edf5a32279dd9fcf87a912c929e5d5110ced1b392ca4cf7782558"
	Jun 02 17:24:19 functional-20220602171905-283122 kubelet[7096]: E0602 17:24:19.356015    7096 remote_runtime.go:479] "StopContainer from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: cc1cadc790059b509c3f2c1c7b79375145832a2550c869f11789ae75cb4449cd" containerID="cc1cadc790059b509c3f2c1c7b79375145832a2550c869f11789ae75cb4449cd"
	Jun 02 17:24:19 functional-20220602171905-283122 kubelet[7096]: E0602 17:24:19.356072    7096 kuberuntime_container.go:728] "Container termination failed with gracePeriod" err="rpc error: code = Unknown desc = Error response from daemon: No such container: cc1cadc790059b509c3f2c1c7b79375145832a2550c869f11789ae75cb4449cd" pod="kube-system/kube-apiserver-functional-20220602171905-283122" podUID=aee0e67b678ad6ab5be7af480fcc4dac containerName="kube-apiserver" containerID="docker://cc1cadc790059b509c3f2c1c7b79375145832a2550c869f11789ae75cb4449cd" gracePeriod=1
	Jun 02 17:24:19 functional-20220602171905-283122 kubelet[7096]: E0602 17:24:19.356092    7096 kuberuntime_container.go:753] "Kill container failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: cc1cadc790059b509c3f2c1c7b79375145832a2550c869f11789ae75cb4449cd" pod="kube-system/kube-apiserver-functional-20220602171905-283122" podUID=aee0e67b678ad6ab5be7af480fcc4dac containerName="kube-apiserver" containerID={Type:docker ID:cc1cadc790059b509c3f2c1c7b79375145832a2550c869f11789ae75cb4449cd}
	Jun 02 17:24:19 functional-20220602171905-283122 kubelet[7096]: E0602 17:24:19.357491    7096 kubelet.go:1808] failed to "KillContainer" for "kube-apiserver" with KillContainerError: "rpc error: code = Unknown desc = Error response from daemon: No such container: cc1cadc790059b509c3f2c1c7b79375145832a2550c869f11789ae75cb4449cd"
	Jun 02 17:24:19 functional-20220602171905-283122 kubelet[7096]: E0602 17:24:19.357543    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-apiserver\" with KillContainerError: \"rpc error: code = Unknown desc = Error response from daemon: No such container: cc1cadc790059b509c3f2c1c7b79375145832a2550c869f11789ae75cb4449cd\"" pod="kube-system/kube-apiserver-functional-20220602171905-283122" podUID=aee0e67b678ad6ab5be7af480fcc4dac
	Jun 02 17:24:19 functional-20220602171905-283122 kubelet[7096]: I0602 17:24:19.358642    7096 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=aee0e67b678ad6ab5be7af480fcc4dac path="/var/lib/kubelet/pods/aee0e67b678ad6ab5be7af480fcc4dac/volumes"
	Jun 02 17:24:19 functional-20220602171905-283122 kubelet[7096]: I0602 17:24:19.783426    7096 kubelet.go:1724] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-20220602171905-283122" podUID=05a32744-7331-4979-8b90-c5b3b18fe049
	Jun 02 17:24:21 functional-20220602171905-283122 kubelet[7096]: I0602 17:24:21.556064    7096 kubelet.go:1729] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-20220602171905-283122"
	Jun 02 17:24:21 functional-20220602171905-283122 kubelet[7096]: I0602 17:24:21.654988    7096 scope.go:110] "RemoveContainer" containerID="81600bc5024615b2f9e08dbaa723148a55c894ddae82385268949678d5c67d63"
	Jun 02 17:24:22 functional-20220602171905-283122 kubelet[7096]: I0602 17:24:22.257743    7096 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-fqvms through plugin: invalid network status for"
	Jun 02 17:24:22 functional-20220602171905-283122 kubelet[7096]: I0602 17:24:22.262398    7096 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="fe66982393c40ec9abaf26e15ec3b18ffce2350937e3d888849ff136bade32cb"
	Jun 02 17:24:23 functional-20220602171905-283122 kubelet[7096]: I0602 17:24:23.271895    7096 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-fqvms through plugin: invalid network status for"
	Jun 02 17:24:23 functional-20220602171905-283122 kubelet[7096]: I0602 17:24:23.287081    7096 scope.go:110] "RemoveContainer" containerID="81600bc5024615b2f9e08dbaa723148a55c894ddae82385268949678d5c67d63"
	Jun 02 17:24:23 functional-20220602171905-283122 kubelet[7096]: I0602 17:24:23.287387    7096 scope.go:110] "RemoveContainer" containerID="109f6c67be4d3e212df55bed330bdcb97c1fc48bb17cf72d49dea2b80e9ce6b6"
	Jun 02 17:24:23 functional-20220602171905-283122 kubelet[7096]: E0602 17:24:23.287564    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4)\"" pod="kube-system/storage-provisioner" podUID=ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4
	
	* 
	* ==> storage-provisioner [109f6c67be4d] <==
	* I0602 17:24:21.872070       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0602 17:24:21.936714       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-20220602171905-283122 -n functional-20220602171905-283122
helpers_test.go:261: (dbg) Run:  kubectl --context functional-20220602171905-283122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestFunctional/serial/ComponentHealth]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-20220602171905-283122 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context functional-20220602171905-283122 describe pod : exit status 1 (43.923623ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context functional-20220602171905-283122 describe pod : exit status 1
--- FAIL: TestFunctional/serial/ComponentHealth (2.55s)
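Note on this failure (reading the captured logs above, not part of the test output): the replacement kube-apiserver container (9dda91ef327e) exited at 17:24:16 with "failed to listen on 0.0.0.0:8441: bind: address already in use" while the prior instance was still bound to the port, and the other control-plane components spent the following seconds resyncing against the surviving apiserver (1c227889c513). A minimal sketch to check by hand which process holds the apiserver port inside the node, assuming the profile name from this run and that ss is available in the node image:

	# list listeners inside the minikube node and look for :8441;
	# the owning PID should belong to the surviving kube-apiserver
	out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh -- sudo ss -ltnp | grep 8441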

TestFunctional/parallel/DashboardCmd (302.61s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220602171905-283122 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:910: output didn't produce a URL
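Note (reading the stderr that follows, not part of the test output): the URL check stalls at the proxy health probe. minikube launches kubectl proxy on port 36195 (dashboard.go:152), then polls the dashboard service through it (dashboard.go:212) and keeps receiving 503 Service Unavailable, presumably because the kubernetes-dashboard pod was not yet ready within the wait window. The same probe can be re-run by hand with the commands recorded in the log (profile, port, and URL copied from this run):

	kubectl --context functional-20220602171905-283122 proxy --port 36195 &
	# a 503 here matches the retry loop below; a 200 means the dashboard became reachable
	curl -sI http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/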
functional_test.go:902: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220602171905-283122 --alsologtostderr -v=1] ...
functional_test.go:902: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220602171905-283122 --alsologtostderr -v=1] stdout:
functional_test.go:902: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220602171905-283122 --alsologtostderr -v=1] stderr:
I0602 17:24:54.601637  324763 out.go:296] Setting OutFile to fd 1 ...
I0602 17:24:54.601851  324763 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0602 17:24:54.601862  324763 out.go:309] Setting ErrFile to fd 2...
I0602 17:24:54.601868  324763 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0602 17:24:54.601996  324763 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
I0602 17:24:54.602214  324763 mustload.go:65] Loading cluster: functional-20220602171905-283122
I0602 17:24:54.602616  324763 config.go:178] Loaded profile config "functional-20220602171905-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0602 17:24:54.603033  324763 cli_runner.go:164] Run: docker container inspect functional-20220602171905-283122 --format={{.State.Status}}
I0602 17:24:54.650273  324763 host.go:66] Checking if "functional-20220602171905-283122" exists ...
I0602 17:24:54.650559  324763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0602 17:24:54.791118  324763 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:72 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2022-06-02 17:24:54.68637646 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
I0602 17:24:54.791311  324763 api_server.go:165] Checking apiserver status ...
I0602 17:24:54.791380  324763 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0602 17:24:54.791436  324763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220602171905-283122
I0602 17:24:54.825233  324763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49457 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/functional-20220602171905-283122/id_rsa Username:docker}
I0602 17:24:54.929105  324763 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7701/cgroup
I0602 17:24:54.938309  324763 api_server.go:181] apiserver freezer: "11:freezer:/docker/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be/kubepods/burstable/podac09769711e5e6472c63ac8d70effc93/1c227889c513eaee118d03c9e4bf3c1e86d411fafa8eab23a2d54f6813fe4491"
I0602 17:24:54.938396  324763 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be/kubepods/burstable/podac09769711e5e6472c63ac8d70effc93/1c227889c513eaee118d03c9e4bf3c1e86d411fafa8eab23a2d54f6813fe4491/freezer.state
I0602 17:24:54.946023  324763 api_server.go:203] freezer state: "THAWED"
I0602 17:24:54.946059  324763 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0602 17:24:54.951760  324763 api_server.go:266] https://192.168.49.2:8441/healthz returned 200:
ok
W0602 17:24:54.951820  324763 out.go:239] * Enabling dashboard ...
* Enabling dashboard ...
I0602 17:24:54.952052  324763 config.go:178] Loaded profile config "functional-20220602171905-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
I0602 17:24:54.952070  324763 addons.go:65] Setting dashboard=true in profile "functional-20220602171905-283122"
I0602 17:24:54.952079  324763 addons.go:153] Setting addon dashboard=true in "functional-20220602171905-283122"
I0602 17:24:54.952118  324763 host.go:66] Checking if "functional-20220602171905-283122" exists ...
I0602 17:24:54.952588  324763 cli_runner.go:164] Run: docker container inspect functional-20220602171905-283122 --format={{.State.Status}}
I0602 17:24:54.990762  324763 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
I0602 17:24:54.992868  324763 out.go:177]   - Using image kubernetesui/metrics-scraper:v1.0.8
I0602 17:24:54.994492  324763 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0602 17:24:54.994517  324763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0602 17:24:54.994578  324763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220602171905-283122
I0602 17:24:55.031217  324763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49457 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/functional-20220602171905-283122/id_rsa Username:docker}
I0602 17:24:55.123417  324763 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0602 17:24:55.123446  324763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0602 17:24:55.137709  324763 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0602 17:24:55.137748  324763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0602 17:24:55.151596  324763 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0602 17:24:55.151628  324763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0602 17:24:55.208376  324763 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0602 17:24:55.208406  324763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4278 bytes)
I0602 17:24:55.224353  324763 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
I0602 17:24:55.224387  324763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0602 17:24:55.240059  324763 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0602 17:24:55.240097  324763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0602 17:24:55.256221  324763 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0602 17:24:55.256263  324763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0602 17:24:55.271126  324763 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0602 17:24:55.271155  324763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0602 17:24:55.289730  324763 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0602 17:24:55.289798  324763 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0602 17:24:55.344990  324763 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0602 17:24:56.352571  324763 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.007480463s)
I0602 17:24:56.352642  324763 addons.go:116] Writing out "functional-20220602171905-283122" config to set dashboard=true...
W0602 17:24:56.353002  324763 out.go:239] * Verifying dashboard health ...
* Verifying dashboard health ...
I0602 17:24:56.354243  324763 kapi.go:59] client config for functional-20220602171905-283122: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17122e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0602 17:24:56.365057  324763 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  216b2d01-9260-482d-a2fe-df6adb7281e6 834 0 2022-06-02 17:24:56 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] []  [{kubectl-client-side-apply Update v1 2022-06-02 17:24:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.111.68.154,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.111.68.154],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0602 17:24:56.365224  324763 out.go:239] * Launching proxy ...
* Launching proxy ...
I0602 17:24:56.365286  324763 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-20220602171905-283122 proxy --port 36195]
I0602 17:24:56.365559  324763 dashboard.go:157] Waiting for kubectl to output host:port ...
I0602 17:24:56.404831  324763 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0602 17:24:56.404927  324763 out.go:239] * Verifying proxy health ...
* Verifying proxy health ...
I0602 17:24:56.443161  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6b1a97b6-8e93-4a13-809e-4ead3c7b756a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc0005989c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00101a100 TLS:<nil>}
I0602 17:24:56.443276  324763 retry.go:31] will retry after 110.466µs: Temporary Error: unexpected response code: 503
I0602 17:24:56.447927  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7a5e7eab-cd94-4ff2-a8e9-4f8c6e3f626f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000674b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000e5100 TLS:<nil>}
I0602 17:24:56.447999  324763 retry.go:31] will retry after 216.077µs: Temporary Error: unexpected response code: 503
I0602 17:24:56.451673  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0d384cd6-1336-4ac4-b91a-96eff75648cc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000025280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00101a200 TLS:<nil>}
I0602 17:24:56.451753  324763 retry.go:31] will retry after 262.026µs: Temporary Error: unexpected response code: 503
I0602 17:24:56.455383  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0a3e155d-c14d-47d6-b85c-fa7e413388a1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000025540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000f1ef00 TLS:<nil>}
I0602 17:24:56.455456  324763 retry.go:31] will retry after 316.478µs: Temporary Error: unexpected response code: 503
I0602 17:24:56.461116  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f1565d18-69e1-4ead-abaf-d303135d5eca] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000674d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000f1f000 TLS:<nil>}
I0602 17:24:56.461186  324763 retry.go:31] will retry after 468.098µs: Temporary Error: unexpected response code: 503
I0602 17:24:56.464568  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f92ad67f-808e-454d-b0c8-b57785e667f9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000674e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00101a300 TLS:<nil>}
I0602 17:24:56.464626  324763 retry.go:31] will retry after 901.244µs: Temporary Error: unexpected response code: 503
I0602 17:24:56.468426  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[930afd05-bfa7-49f8-b0fc-fa566714315e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000598d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000f1f100 TLS:<nil>}
I0602 17:24:56.468493  324763 retry.go:31] will retry after 644.295µs: Temporary Error: unexpected response code: 503
I0602 17:24:56.535597  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5d08ec25-c063-4170-ac2f-031c9f68996d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000025740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000e5200 TLS:<nil>}
I0602 17:24:56.535679  324763 retry.go:31] will retry after 1.121724ms: Temporary Error: unexpected response code: 503
I0602 17:24:56.541283  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f51528e7-6111-4ac7-9bd1-870169a39e8d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000598fc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000f1f200 TLS:<nil>}
I0602 17:24:56.541360  324763 retry.go:31] will retry after 1.529966ms: Temporary Error: unexpected response code: 503
I0602 17:24:56.547003  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ff05f3be-5e1d-407d-8135-1fc1e5ddd68d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000674ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000e5300 TLS:<nil>}
I0602 17:24:56.547088  324763 retry.go:31] will retry after 3.078972ms: Temporary Error: unexpected response code: 503
I0602 17:24:56.552995  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f9bf00ad-bb6b-4ae6-adbc-5771502e0b02] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000025ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00101a400 TLS:<nil>}
I0602 17:24:56.553100  324763 retry.go:31] will retry after 5.854223ms: Temporary Error: unexpected response code: 503
I0602 17:24:56.562290  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3e5bc082-d8cf-4b4d-bed9-2349fe43a074] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000675000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000f1f300 TLS:<nil>}
I0602 17:24:56.562375  324763 retry.go:31] will retry after 11.362655ms: Temporary Error: unexpected response code: 503
I0602 17:24:56.577891  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f11fec7a-cdcd-44be-a6d4-c353f5bc7e27] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000599380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00101a500 TLS:<nil>}
I0602 17:24:56.577977  324763 retry.go:31] will retry after 9.267303ms: Temporary Error: unexpected response code: 503
I0602 17:24:56.635567  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[04360b42-9904-47ef-a0f9-e8be2202fda6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000599a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000e5400 TLS:<nil>}
I0602 17:24:56.635650  324763 retry.go:31] will retry after 17.139291ms: Temporary Error: unexpected response code: 503
I0602 17:24:56.656979  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9a89132f-12ab-4137-a585-914ebc57966e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000025c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000e5500 TLS:<nil>}
I0602 17:24:56.657116  324763 retry.go:31] will retry after 23.881489ms: Temporary Error: unexpected response code: 503
I0602 17:24:56.735588  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1946cd08-5998-45ad-9b2e-05c17f6ca5f0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000599b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000f1f400 TLS:<nil>}
I0602 17:24:56.735673  324763 retry.go:31] will retry after 42.427055ms: Temporary Error: unexpected response code: 503
I0602 17:24:56.782132  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9cd0e570-4368-4900-a1ad-749beb958aa3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000675180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000e5600 TLS:<nil>}
I0602 17:24:56.782200  324763 retry.go:31] will retry after 51.432832ms: Temporary Error: unexpected response code: 503
I0602 17:24:56.837764  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[445b9427-0222-4058-8b1c-adc3db9396bd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000599d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00101a600 TLS:<nil>}
I0602 17:24:56.837840  324763 retry.go:31] will retry after 78.14118ms: Temporary Error: unexpected response code: 503
I0602 17:24:56.936634  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bc534e86-4ab2-44b4-8e73-d39c14a025aa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:56 GMT]] Body:0xc000025e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000e5700 TLS:<nil>}
I0602 17:24:56.936752  324763 retry.go:31] will retry after 174.255803ms: Temporary Error: unexpected response code: 503
I0602 17:24:57.136497  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[85bd139b-6b3a-4bee-97ba-041551a27fbf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:57 GMT]] Body:0xc000599ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000f1f500 TLS:<nil>}
I0602 17:24:57.136574  324763 retry.go:31] will retry after 159.291408ms: Temporary Error: unexpected response code: 503
I0602 17:24:57.336636  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5eb81301-5cd6-4f04-81c1-9adeb1437aba] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:57 GMT]] Body:0xc000599fc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000e5800 TLS:<nil>}
I0602 17:24:57.336722  324763 retry.go:31] will retry after 233.827468ms: Temporary Error: unexpected response code: 503
I0602 17:24:57.574294  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3964a833-9b68-47d4-9226-127ca20da553] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:57 GMT]] Body:0xc000675300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000e5900 TLS:<nil>}
I0602 17:24:57.574380  324763 retry.go:31] will retry after 429.392365ms: Temporary Error: unexpected response code: 503
I0602 17:24:58.036202  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[524dfd35-fa65-47c4-bb7d-b2447bfc7aed] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:58 GMT]] Body:0xc000126080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00101a700 TLS:<nil>}
I0602 17:24:58.036283  324763 retry.go:31] will retry after 801.058534ms: Temporary Error: unexpected response code: 503
I0602 17:24:58.841272  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e5e4bf1e-e879-4ee5-be00-c5ac7796ca70] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:24:58 GMT]] Body:0xc000025f80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000e5a00 TLS:<nil>}
I0602 17:24:58.841348  324763 retry.go:31] will retry after 1.529087469s: Temporary Error: unexpected response code: 503
I0602 17:25:00.374052  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[367e648f-369e-43cf-9ea8-50c491d41579] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:25:00 GMT]] Body:0xc000675480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000f1f600 TLS:<nil>}
I0602 17:25:00.374140  324763 retry.go:31] will retry after 1.335136154s: Temporary Error: unexpected response code: 503
I0602 17:25:01.713623  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6adc430a-973a-4046-81ca-ef5361f40b6d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:25:01 GMT]] Body:0xc000126300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00101a800 TLS:<nil>}
I0602 17:25:01.713702  324763 retry.go:31] will retry after 2.012724691s: Temporary Error: unexpected response code: 503
I0602 17:25:03.736622  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c0f067d1-86fb-4c9a-bd5e-4aae0c44bfa6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:25:03 GMT]] Body:0xc00006b0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00111e000 TLS:<nil>}
I0602 17:25:03.736708  324763 retry.go:31] will retry after 4.744335389s: Temporary Error: unexpected response code: 503
I0602 17:25:08.484051  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[404e4762-21d3-46e1-b320-a73692c52402] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:25:08 GMT]] Body:0xc0006755c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00111e100 TLS:<nil>}
I0602 17:25:08.484113  324763 retry.go:31] will retry after 4.014454686s: Temporary Error: unexpected response code: 503
I0602 17:25:12.502974  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[00d19990-e236-4a15-b0ab-fd5843fb71f7] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:25:12 GMT]] Body:0xc0006756c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00101a900 TLS:<nil>}
I0602 17:25:12.503050  324763 retry.go:31] will retry after 11.635741654s: Temporary Error: unexpected response code: 503
I0602 17:25:24.146030  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c4a620ec-c6f0-4d16-b625-327f1c04ceee] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:25:24 GMT]] Body:0xc000126800 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000f1f700 TLS:<nil>}
I0602 17:25:24.146097  324763 retry.go:31] will retry after 15.298130033s: Temporary Error: unexpected response code: 503
I0602 17:25:39.448058  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fba2b140-f4c4-4006-b012-9da04e5de8a5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:25:39 GMT]] Body:0xc000675780 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00111e200 TLS:<nil>}
I0602 17:25:39.448138  324763 retry.go:31] will retry after 19.631844237s: Temporary Error: unexpected response code: 503
I0602 17:25:59.084894  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a6a4e900-8b2f-4fcc-9efd-4978dd1d7813] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:25:59 GMT]] Body:0xc00006b400 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00101aa00 TLS:<nil>}
I0602 17:25:59.084987  324763 retry.go:31] will retry after 15.195386994s: Temporary Error: unexpected response code: 503
I0602 17:26:14.284472  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4a23a964-d465-438e-a7d6-36a690cb460f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:26:14 GMT]] Body:0xc000126bc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000f1f800 TLS:<nil>}
I0602 17:26:14.284548  324763 retry.go:31] will retry after 28.402880652s: Temporary Error: unexpected response code: 503
I0602 17:26:42.692613  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[26846f26-3979-408a-a4d5-cfeadd7cd1b2] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:26:42 GMT]] Body:0xc00006bcc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00111e300 TLS:<nil>}
I0602 17:26:42.692679  324763 retry.go:31] will retry after 1m6.435206373s: Temporary Error: unexpected response code: 503
I0602 17:27:49.133920  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5859aa93-f6e6-4de0-887d-8416ded3747a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:27:49 GMT]] Body:0xc0005c8640 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00101a000 TLS:<nil>}
I0602 17:27:49.134016  324763 retry.go:31] will retry after 1m28.514497132s: Temporary Error: unexpected response code: 503
I0602 17:29:17.652285  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[38739871-1ff6-44e0-946f-dbcd63e39a83] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:29:17 GMT]] Body:0xc0007641c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00101ab00 TLS:<nil>}
I0602 17:29:17.652397  324763 retry.go:31] will retry after 34.767217402s: Temporary Error: unexpected response code: 503
I0602 17:29:52.424107  324763 dashboard.go:212] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[159b10c4-5697-45e3-b168-944ef3538b21] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Jun 2022 17:29:52 GMT]] Body:0xc0005c8640 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001f0200 TLS:<nil>}
I0602 17:29:52.424190  324763 retry.go:31] will retry after 1m5.688515861s: Temporary Error: unexpected response code: 503
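
The retry.go entries above show the dashboard poller treating every 503 as a temporary error and backing off from sub-millisecond waits at 17:24:56 to minute-scale waits by 17:29:52, with the service never becoming ready. A minimal Go sketch of that poll-with-jittered-backoff pattern follows; the names (pollDashboard, maxWait, the 200µs starting delay) are illustrative assumptions, not minikube's actual retry implementation.

package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// pollDashboard GETs url until it answers 200 OK or maxWait elapses,
// roughly doubling a jittered delay between attempts. This mirrors the
// shape of the retry.go lines above, where the wait grows from
// microseconds to minutes while every attempt returns 503.
func pollDashboard(url string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Microsecond // hypothetical start; the first logged retries are in this range
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // dashboard is serving
			}
			fmt.Printf("will retry after %v: Temporary Error: unexpected response code: %d\n", delay, resp.StatusCode)
		}
		time.Sleep(delay)
		delay = delay*2 + time.Duration(rand.Int63n(int64(delay))) // grow with jitter
	}
	return fmt.Errorf("%s still unavailable after %v", url, maxWait)
}

func main() {
	if err := pollDashboard("http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
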
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220602171905-283122
helpers_test.go:235: (dbg) docker inspect functional-20220602171905-283122:

-- stdout --
	[
	    {
	        "Id": "ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be",
	        "Created": "2022-06-02T17:19:13.281758205Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306853,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:19:13.665348671Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be/hostname",
	        "HostsPath": "/var/lib/docker/containers/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be/hosts",
	        "LogPath": "/var/lib/docker/containers/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be-json.log",
	        "Name": "/functional-20220602171905-283122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220602171905-283122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220602171905-283122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/86fe0ac079b421c9dbe7f7740293c7eec9418fad08adbe6a17a004a9f4752e8c-init/diff:/var/lib/docker/overlay2/9517781e97ae29bbe1cf38381a601e806342524da1c5d5dc15ba54ec17f204b6/diff:/var/lib/docker/overlay2/28e14d8724bd82b9b6adde55b66e6d1f1be4fa6c3379f2f44ca1138dd648175a/diff:/var/lib/docker/overlay2/24589dfb9ddc941e7cb22b5e38dec69d1af0a29a330d344152ca7f26010354a8/diff:/var/lib/docker/overlay2/66c45d6273a3507b6e0b6b0753ec4fc82f07160fd115626e24a42bda27bbba86/diff:/var/lib/docker/overlay2/a6f0c661bd178a2a0fef034493a114067e8952b8516e69aff1e1e94a75918bbc/diff:/var/lib/docker/overlay2/5b40a045506372be84bfb01d05b068bc4cc926c5eb9e868226c4c386e8a331c7/diff:/var/lib/docker/overlay2/dbf9a9c6e59628984c95ea93ff4ada66c53b22ca3c2f910f70efba895cf40718/diff:/var/lib/docker/overlay2/5eea619393ee989a35a37a13b0633685af91729e1b74f6e73fd94b63c9d7d1fa/diff:/var/lib/docker/overlay2/24de6fbece5386c3658fa3e2c01723e6c1c24fb837d53b3d5a8c75b64ce2cd06/diff:/var/lib/docker/overlay2/04593e
f7d3bf72ee8b7439b0326af5b9818930e0dd48c35dba97e3b4ae39f4ee/diff:/var/lib/docker/overlay2/09310697cecb557085bb4d5dfc810a82fc533759ad98179263b2fc94153a64fa/diff:/var/lib/docker/overlay2/ee75c420f0615a19a5d44cfe042bdca5a2cf0f1c541168ba424140cf2aa7f98d/diff:/var/lib/docker/overlay2/fc8643a3e2060ab0365cd1227ab9ebbd5a573e6ebf8379bc6b64a756a1f66562/diff:/var/lib/docker/overlay2/0f8571cab87b35114fab495cbde9af8ac7e58be99412da82a79c90239e2227e1/diff:/var/lib/docker/overlay2/e1a5ae079e4f566081e5e786955257c23809c8719ea6b4593c8c91ac00380379/diff:/var/lib/docker/overlay2/373565e63dec12c85906e80b4a030f7d852992a756749be9424ac17802d589c0/diff:/var/lib/docker/overlay2/b886b42aa5e9f06b4381a6f58d55338774a2d02af2e91e3de156aa4bca4e4be0/diff:/var/lib/docker/overlay2/dee534a744ce54d08ca752b7f64e1772c063d4ba1e008c5a9773188bb009c377/diff:/var/lib/docker/overlay2/c8cf6667f29ffc5417675e4bd8eee6a91468de13292e3cb6ae2c70d3ad56d5fd/diff:/var/lib/docker/overlay2/072b57a4228c3928bdb27162809b102b485bb94c5f726c9a3e9892267ed30839/diff:/var/lib/d
ocker/overlay2/491d5b51bf6de1cfc4eef288c454f54e3b424e3b83fd9dc216c35891c7923f55/diff:/var/lib/docker/overlay2/78f167c2cc286e4058ef09d5a719437afddde854aac41b8d054567865fa61024/diff:/var/lib/docker/overlay2/10763b3ba26983ed9426ccca08bfceb7037d95c69827847e2ff755b4f2c5a4ab/diff:/var/lib/docker/overlay2/e559098efb21164d96bd0aee50fee9f48310df68887c5c884f4ba3f3fc895260/diff:/var/lib/docker/overlay2/555ef1505fe2b1ab4244f073e9212f269d70d8cd6ca0cd517d21cc80cd25b4c1/diff:/var/lib/docker/overlay2/0201da7c907a1f8f564cda48c3f3a403e484d901739d2584f1343a7907685c5e/diff:/var/lib/docker/overlay2/0abae5d84f8497a5114349421415431c43a34015ad98f6c9c3140143d2f1f383/diff:/var/lib/docker/overlay2/80c37982eef2dc00bd08f208551df377570134af84008156f6a500855fd2ba10/diff:/var/lib/docker/overlay2/cc38481bb11be97cd54c0032709337a491580c2f0ae6d3f7eec4c3616f8e3126/diff:/var/lib/docker/overlay2/dda978aeb4c7317bafbc077229c4417a4d8e7658d9604803c401448b471eab5e/diff:/var/lib/docker/overlay2/7c230ae0970689e2932cdaadc9e3f86ec8215fbf384c9eb59cfb958daf7
3197e/diff:/var/lib/docker/overlay2/145db51b169211e46c3bbab5ca998b2a019a4a813801acc2dae9b2dc09bc2ecc/diff:/var/lib/docker/overlay2/e604163672a013549c59bc042d1b932a3dad9e104d7079fbbb3ca24c29351c0d/diff:/var/lib/docker/overlay2/5a304d8680fd3fb594754173a70646923e2715ed21c98b8c43e1f1ef2d7abd6a/diff:/var/lib/docker/overlay2/b77f00351ddd70be2f3d9dd622b78e1a2937c4ba71585b357fef88d285eb762e/diff:/var/lib/docker/overlay2/196b6ae042b9b08748253c16f5e2b56d7f9a25077008d3d447cbc75447ed5e2a/diff:/var/lib/docker/overlay2/8c7c3fbb0bed8f7440d11104257806c70d16fd7d432311fc208d5012a439e5a1/diff:/var/lib/docker/overlay2/10604aa3b8ed5698c9fa8f55e00b9a9dc27d6b99135255d6c756fc080e615fc6/diff:/var/lib/docker/overlay2/c6602a17e9b0ab1d84297109204b48e60f293952fbc1feaf362895ec86860ead/diff:/var/lib/docker/overlay2/3828c5e50db9be2b4bc30fce040392ea735da5eaa819ad0848cb4fb79fc367da/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/86fe0ac079b421c9dbe7f7740293c7eec9418fad08adbe6a17a004a9f4752e8c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/86fe0ac079b421c9dbe7f7740293c7eec9418fad08adbe6a17a004a9f4752e8c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/86fe0ac079b421c9dbe7f7740293c7eec9418fad08adbe6a17a004a9f4752e8c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220602171905-283122",
	                "Source": "/var/lib/docker/volumes/functional-20220602171905-283122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220602171905-283122",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220602171905-283122",
	                "name.minikube.sigs.k8s.io": "functional-20220602171905-283122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0b577a7c7c0f72102b1d1c7e48acd0a27d4af6fac311a08cd9abd09a1ffd9224",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49457"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49456"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49453"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49455"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49454"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0b577a7c7c0f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220602171905-283122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ccf73bf4d78c",
	                        "functional-20220602171905-283122"
	                    ],
	                    "NetworkID": "b21549cf349aba8b9852bb7975ba376ac9fe089fee8b7f76e4abbd8a3c8aa318",
	                    "EndpointID": "1dd452a0f5b2f58c8dec0c5a585634fa07b5e2ec3756cb8b90f9991f5515fd22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
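
The inspect dump above is captured in full by the post-mortem helper; when only a field or two matters, docker inspect can render them directly with --format, which evaluates a Go template against the same JSON (the minikube status command just below uses the same Go-template idiom). A small sketch, assuming only the container name from this report:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// .State.Status and .RestartCount are the fields shown as "running"
	// and 0 in the dump above.
	out, err := exec.Command("docker", "inspect",
		"--format", "{{.State.Status}} restarts={{.RestartCount}}",
		"functional-20220602171905-283122").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Print(string(out))
}
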
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-20220602171905-283122 -n functional-20220602171905-283122
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-20220602171905-283122 logs -n 25: (1.52573472s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|-------------------------------------------------------------------------|----------------------------------|---------|----------------|---------------------|---------------------|
	|    Command     |                                  Args                                   |             Profile              |  User   |    Version     |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------|----------------------------------|---------|----------------|---------------------|---------------------|
	| image          | functional-20220602171905-283122 image rm                               | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-20220602171905-283122 |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | image ls                                                                |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122 image load                             | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | image ls                                                                |                                  |         |                |                     |                     |
	| ssh            | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | ssh stat                                                                |                                  |         |                |                     |                     |
	|                | /mount-9p/created-by-test                                               |                                  |         |                |                     |                     |
	| ssh            | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | ssh stat                                                                |                                  |         |                |                     |                     |
	|                | /mount-9p/created-by-pod                                                |                                  |         |                |                     |                     |
	| ssh            | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | ssh sudo umount -f /mount-9p                                            |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122 image save --daemon                    | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-20220602171905-283122 |                                  |         |                |                     |                     |
	| ssh            | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | ssh findmnt -T /mount-9p | grep                                         |                                  |         |                |                     |                     |
	|                | 9p                                                                      |                                  |         |                |                     |                     |
	| ssh            | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | ssh -- ls -la /mount-9p                                                 |                                  |         |                |                     |                     |
	| service        | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | service hello-node-connect --url                                        |                                  |         |                |                     |                     |
	| update-context | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | update-context --alsologtostderr                                        |                                  |         |                |                     |                     |
	|                | -v=2                                                                    |                                  |         |                |                     |                     |
	| service        | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | service list                                                            |                                  |         |                |                     |                     |
	| update-context | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | update-context --alsologtostderr                                        |                                  |         |                |                     |                     |
	|                | -v=2                                                                    |                                  |         |                |                     |                     |
	| update-context | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | update-context --alsologtostderr                                        |                                  |         |                |                     |                     |
	|                | -v=2                                                                    |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | image ls --format short                                                 |                                  |         |                |                     |                     |
	| service        | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | service --namespace=default                                             |                                  |         |                |                     |                     |
	|                | --https --url hello-node                                                |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | image ls --format yaml                                                  |                                  |         |                |                     |                     |
	| service        | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | service hello-node --url                                                |                                  |         |                |                     |                     |
	|                | --format={{.IP}}                                                        |                                  |         |                |                     |                     |
	| service        | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | service hello-node --url                                                |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | image ls --format json                                                  |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | image ls --format table                                                 |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122 image build -t                         | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | localhost/my-image:functional-20220602171905-283122                     |                                  |         |                |                     |                     |
	|                | testdata/build                                                          |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | image ls                                                                |                                  |         |                |                     |                     |
	| logs           | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:28 UTC | 02 Jun 22 17:28 UTC |
	|                | logs -n 25                                                              |                                  |         |                |                     |                     |
	|----------------|-------------------------------------------------------------------------|----------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 17:24:54
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 17:24:54.095428  324439 out.go:296] Setting OutFile to fd 1 ...
	I0602 17:24:54.095633  324439 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:24:54.095645  324439 out.go:309] Setting ErrFile to fd 2...
	I0602 17:24:54.095651  324439 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:24:54.095773  324439 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 17:24:54.096004  324439 out.go:303] Setting JSON to false
	I0602 17:24:54.106014  324439 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7647,"bootTime":1654183047,"procs":347,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0602 17:24:54.106126  324439 start.go:125] virtualization: kvm guest
	I0602 17:24:54.109961  324439 out.go:177] * [functional-20220602171905-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0602 17:24:54.112105  324439 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 17:24:54.114064  324439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 17:24:54.115761  324439 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:24:54.117342  324439 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 17:24:54.118880  324439 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0602 17:24:54.120817  324439 config.go:178] Loaded profile config "functional-20220602171905-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:24:54.121342  324439 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 17:24:54.173331  324439 docker.go:137] docker version: linux-20.10.16
	I0602 17:24:54.173469  324439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:24:54.287775  324439 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:72 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:40 SystemTime:2022-06-02 17:24:54.209895397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:24:54.287878  324439 docker.go:254] overlay module found
	I0602 17:24:54.291515  324439 out.go:177] * Using the docker driver based on existing profile
	I0602 17:24:54.292985  324439 start.go:284] selected driver: docker
	I0602 17:24:54.293034  324439 start.go:806] validating driver "docker" against &{Name:functional-20220602171905-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602171905-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:24:54.293159  324439 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 17:24:54.293487  324439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:24:54.405188  324439 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:72 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:40 SystemTime:2022-06-02 17:24:54.3270598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:24:54.405716  324439 cni.go:95] Creating CNI manager for ""
	I0602 17:24:54.405733  324439 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 17:24:54.405751  324439 start_flags.go:306] config:
	{Name:functional-20220602171905-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602171905-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:24:54.408661  324439 out.go:177] * dry-run validation complete!
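	
	The dry-run above only validates the existing profile's configuration; it does not mutate the cluster. To reproduce this validation step in isolation, a command along these lines should work (profile name and driver taken from the log above; --dry-run is the mode minikube reports here):
	
	    minikube start -p functional-20220602171905-283122 --driver=docker --dry-run --alsologtostderr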
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 17:19:13 UTC, end at Thu 2022-06-02 17:29:55 UTC. --
	Jun 02 17:24:09 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:09.035835009Z" level=info msg="ignoring event" container=b2cbc18e9595c4538045222bd0302eaf7edfa2dca730e18fb44a5bfee30ac53a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:09 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:09.068429157Z" level=info msg="ignoring event" container=5353917f3d71e2c8eb1026adff6a765f04b08878565484fd6471af65d307e8dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:09 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:09.453926823Z" level=info msg="ignoring event" container=22f9f15b9a3ff477d075ef3e12b621e470e48ec1d2156cf9e4dc638719099d31 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:09 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:09.456833730Z" level=info msg="ignoring event" container=92951e242fba8b7554198442693b938b282b22612387e2389936b56444d193b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:13 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:13.848379818Z" level=info msg="ignoring event" container=b8aba85da98e3534f56f991138893080c425d892a83538e50da55f94975e1f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:16 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:16.867632840Z" level=info msg="ignoring event" container=9dda91ef327edf5a32279dd9fcf87a912c929e5d5110ced1b392ca4cf7782558 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:17 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:17.435758519Z" level=info msg="ignoring event" container=cd87610507c56062c12ab62a85742939cc69769b9f65a31c73d1129bba837c3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:17 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:17.545371070Z" level=info msg="ignoring event" container=cc1cadc790059b509c3f2c1c7b79375145832a2550c869f11789ae75cb4449cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:21 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:21.954403562Z" level=info msg="ignoring event" container=109f6c67be4d3e212df55bed330bdcb97c1fc48bb17cf72d49dea2b80e9ce6b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:38 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:38.460970076Z" level=info msg="ignoring event" container=3307c17eecb9de8d9e97d99a6315ab9d68f3574d54a473c42d12721710c36d72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:47 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:47.057501793Z" level=info msg="ignoring event" container=8217a37d80b6ae37401153b5b319dbf9996bea0661bf786a4141fe45e9ff1046 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:48 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:48.954546864Z" level=info msg="ignoring event" container=1864c7843705e7d42624a5a3646f79b3d700e7094b952e42f3d1a40fdf9f90f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:58 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:58.895733264Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Jun 02 17:25:00 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:00.655753956Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 02 17:25:06 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:06.848058834Z" level=info msg="ignoring event" container=6db51a182746050c3843e3ab51d8a1a6b308b7b3a127fe8e064743256262a041 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:25:06 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:06.860099329Z" level=info msg="ignoring event" container=95c68876429fe242bc9335c8a49466f1b50c30e6812d091795501e85226826e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:25:07 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:07.246596540Z" level=info msg="ignoring event" container=50e34c87b49d21b678d1636ea47f559e8adac4a30426979f21030a3ef3d8916e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:25:10 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:10.060752806Z" level=info msg="ignoring event" container=3f1df46a144ff1ad941834fdf47d49dde96550c5a247b8d7506e6ca0e3f066e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:25:10 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:10.267081741Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
	Jun 02 17:25:31 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:31.504608206Z" level=info msg="ignoring event" container=4dad6a0ed29937785bce05b21e720349caa4307cc06fbb039299ebe10acc2a4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:25:51 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:51.479376729Z" level=info msg="ignoring event" container=02baf15680d4c1094fce37322cd4b68296dfa0b6acaf643708815296de278b11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:26:01 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:26:01.503960587Z" level=info msg="ignoring event" container=e91b5e1a64a601fb83ea3d38d9e016b6ad5ac47e2a6e48f262771fa66e028681 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:26:45 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:26:45.509859736Z" level=info msg="ignoring event" container=bbcaf9c170212d23eb2eeaed2b42f75848498ad9ed38d28e5a78bf9394ae940a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:27:15 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:27:15.496156361Z" level=info msg="ignoring event" container=c59100e6de6358f997c75c3c6d6879a7dd58629e1cacf11068ce831654d16fc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:28:16 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:28:16.502812842Z" level=info msg="ignoring event" container=c9e9483297a4477fc703ac98e4de2dfc179cfe665422f68be0c8f2d7e30470da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
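	
	The "ignoring event ... TaskDelete" entries above appear to be dockerd acknowledging containerd task-delete notifications for containers that had already exited, and the two "reference for unknown type" warnings line up with the by-digest pulls of the dashboard images. Assuming the profile name from this report, the same journal can be pulled from the node for closer inspection:
	
	    minikube ssh -p functional-20220602171905-283122 -- sudo journalctl -u docker --since "2022-06-02 17:24:00"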
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID
	c9e9483297a44       1042d9e0d8fcc                                                                                          About a minute ago   Exited              kubernetes-dashboard        5                   ee72ed23645d6
	c59100e6de635       6e38f40d628db                                                                                          2 minutes ago        Exited              storage-provisioner         10                  c5a904a60d2e4
	b3877ee1080c2       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   4 minutes ago        Running             dashboard-metrics-scraper   0                   2c0a51d506cbe
	9d64287582997       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969          4 minutes ago        Running             echoserver                  0                   f074952e8f7b1
	a80cdae372c72       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969          4 minutes ago        Running             echoserver                  0                   2a7e154e98709
	8217a37d80b6a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    5 minutes ago        Exited              mount-munger                0                   1864c7843705e
	9de98d2e05435       nginx@sha256:a74534e76ee1121d418fa7394ca930eb67440deda413848bc67c68138535b989                          5 minutes ago        Running             nginx                       0                   d4f8df5c33698
	6c77f1098174a       mysql@sha256:7e99b2b8d5bca914ef31059858210f57b009c40375d647f0d4d65ecd01d6b1d5                          5 minutes ago        Running             mysql                       0                   25d6e05720578
	c276864164b49       a4ca41631cc7a                                                                                          5 minutes ago        Running             coredns                     1                   fe66982393c40
	1c227889c513e       8fa62c12256df                                                                                          5 minutes ago        Running             kube-apiserver              1                   32cbbdc559078
	9dda91ef327ed       8fa62c12256df                                                                                          5 minutes ago        Exited              kube-apiserver              0                   32cbbdc559078
	5cabfd9de847a       595f327f224a4                                                                                          5 minutes ago        Running             kube-scheduler              1                   19dfc0de11563
	bc1df717c35fc       4c03754524064                                                                                          5 minutes ago        Running             kube-proxy                  1                   341b9b7854419
	6280eaa54ed8b       25f8c7f3da61c                                                                                          5 minutes ago        Running             etcd                        1                   b937529554388
	020a23185b37e       df7b72818ad2e                                                                                          5 minutes ago        Running             kube-controller-manager     1                   4d6e85f997653
	b8aba85da98e3       a4ca41631cc7a                                                                                          10 minutes ago       Exited              coredns                     0                   0544243af64e4
	4b794e67eb6e3       4c03754524064                                                                                          10 minutes ago       Exited              kube-proxy                  0                   895c379f69180
	b2cbc18e9595c       25f8c7f3da61c                                                                                          10 minutes ago       Exited              etcd                        0                   3f293bdfbc88e
	22f9f15b9a3ff       595f327f224a4                                                                                          10 minutes ago       Exited              kube-scheduler              0                   4fa66a37576fb
	5353917f3d71e       df7b72818ad2e                                                                                          10 minutes ago       Exited              kube-controller-manager     0                   ae98ee0fc94fe
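	
	Two rows stand out: kubernetes-dashboard is Exited on attempt 5 and storage-provisioner on attempt 10, i.e. both are restart-looping while everything else recovered after the apiserver restart. A first diagnostic step, using pod names recorded elsewhere in this report, would be to fetch the logs of the previous attempts:
	
	    kubectl --context functional-20220602171905-283122 -n kubernetes-dashboard logs kubernetes-dashboard-cd7c84bfc-z2z56 --previous
	    kubectl --context functional-20220602171905-283122 -n kube-system logs storage-provisioner --previous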
	
	* 
	* ==> coredns [b8aba85da98e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [c276864164b4] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
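	
	CoreDNS's ready plugin logs "Still waiting on: kubernetes" until its kubernetes plugin finishes syncing against the API server, which is why the restarted instance stays unready through the apiserver restart window captured below. A quick status check (label selector assumed to be the stock k8s-app=kube-dns used by kubeadm-style deployments):
	
	    kubectl --context functional-20220602171905-283122 -n kube-system get pods -l k8s-app=kube-dns -o wide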
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220602171905-283122
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220602171905-283122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=functional-20220602171905-283122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T17_19_29_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 17:19:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220602171905-283122
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 17:29:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 17:25:16 +0000   Thu, 02 Jun 2022 17:19:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 17:25:16 +0000   Thu, 02 Jun 2022 17:19:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 17:25:16 +0000   Thu, 02 Jun 2022 17:19:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 17:25:16 +0000   Thu, 02 Jun 2022 17:24:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220602171905-283122
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                e5257698-d461-4a2f-b7e2-77ca6f3add35
	  Boot ID:                    eac629ea-39e3-4b75-b891-94bd750a4fe6
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-54fbb85-lqgxj                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  default                     hello-node-connect-74cf8bc446-hqrhr                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  default                     mysql-b87c45988-h4hfc                                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     5m23s
	  default                     nginx-svc                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 coredns-64897985d-fqvms                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-functional-20220602171905-283122                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kube-apiserver-functional-20220602171905-283122              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-controller-manager-functional-20220602171905-283122    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-x5hdb                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-functional-20220602171905-283122              100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-65b4bd797-p2f56                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-cd7c84bfc-z2z56                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 5m39s              kube-proxy  
	  Normal  Starting                 10m                kube-proxy  
	  Normal  NodeHasSufficientMemory  10m (x4 over 10m)  kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x4 over 10m)  kubelet     Node functional-20220602171905-283122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x3 over 10m)  kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet     Starting kubelet.
	  Normal  Starting                 10m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientPID     10m                kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             10m                kubelet     Node functional-20220602171905-283122 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  10m                kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet     Node functional-20220602171905-283122 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                10m                kubelet     Node functional-20220602171905-283122 status is now: NodeReady
	  Normal  Starting                 5m41s              kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    5m41s              kubelet     Node functional-20220602171905-283122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s              kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             5m41s              kubelet     Node functional-20220602171905-283122 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  5m41s              kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m41s              kubelet     Node functional-20220602171905-283122 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  5m41s              kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientMemory
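	
	This block is kubectl describe node output and can be regenerated at any point against the profile's context; note the Ready condition last transitioned at 17:24:15, matching the apiserver restart captured in the sections below:
	
	    kubectl --context functional-20220602171905-283122 describe node functional-20220602171905-283122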
	
	* 
	* ==> dmesg <==
	* [  +0.007890] FS-Cache: O-key=[8] 'e41b080000000000'
	[  +0.006311] FS-Cache: N-cookie c=00000000f15c64a3 [p=00000000e165c10a fl=2 nc=0 na=1]
	[  +0.009376] FS-Cache: N-cookie d=00000000c3ebe5aa n=00000000648c1af9
	[  +0.007889] FS-Cache: N-key=[8] 'e41b080000000000'
	[  +0.008619] FS-Cache: Duplicate cookie detected
	[  +0.005160] FS-Cache: O-cookie c=00000000d5f5a506 [p=00000000e165c10a fl=226 nc=0 na=1]
	[  +0.009515] FS-Cache: O-cookie d=00000000c3ebe5aa n=000000004fa77641
	[  +0.007877] FS-Cache: O-key=[8] 'e41b080000000000'
	[  +0.006282] FS-Cache: N-cookie c=000000007109d8c8 [p=00000000e165c10a fl=2 nc=0 na=1]
	[  +0.009340] FS-Cache: N-cookie d=00000000c3ebe5aa n=000000003676904e
	[  +0.007874] FS-Cache: N-key=[8] 'e41b080000000000'
	[  +3.902222] FS-Cache: Duplicate cookie detected
	[  +0.004696] FS-Cache: O-cookie c=0000000082f161a7 [p=00000000e165c10a fl=226 nc=0 na=1]
	[  +0.008176] FS-Cache: O-cookie d=00000000c3ebe5aa n=0000000036c711d9
	[  +0.006523] FS-Cache: O-key=[8] 'e11b080000000000'
	[  +0.004982] FS-Cache: N-cookie c=0000000011ca43b7 [p=00000000e165c10a fl=2 nc=0 na=1]
	[  +0.009330] FS-Cache: N-cookie d=00000000c3ebe5aa n=00000000971af6fd
	[  +0.007850] FS-Cache: N-key=[8] 'e11b080000000000'
	[  +0.446074] FS-Cache: Duplicate cookie detected
	[  +0.004669] FS-Cache: O-cookie c=000000003e4742e5 [p=00000000e165c10a fl=226 nc=0 na=1]
	[  +0.008158] FS-Cache: O-cookie d=00000000c3ebe5aa n=00000000348fa4e2
	[  +0.006491] FS-Cache: O-key=[8] 'e61b080000000000'
	[  +0.005017] FS-Cache: N-cookie c=00000000dd942d83 [p=00000000e165c10a fl=2 nc=0 na=1]
	[  +0.009347] FS-Cache: N-cookie d=00000000c3ebe5aa n=00000000b1a11134
	[  +0.007899] FS-Cache: N-key=[8] 'e61b080000000000'
	
	* 
	* ==> etcd [6280eaa54ed8] <==
	* {"level":"info","ts":"2022-06-02T17:24:10.348Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-02T17:24:10.348Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-02T17:24:10.348Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2022-06-02T17:24:10.352Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:24:10.352Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:24:10.352Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220602171905-283122 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:24:11.236Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T17:24:11.236Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T17:24:11.237Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T17:24:11.237Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2022-06-02T17:24:43.265Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"224.74551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2022-06-02T17:24:43.265Z","caller":"traceutil/trace.go:171","msg":"trace[1722354228] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:714; }","duration":"224.889044ms","start":"2022-06-02T17:24:43.040Z","end":"2022-06-02T17:24:43.265Z","steps":["trace[1722354228] 'agreement among raft nodes before linearized reading'  (duration: 22.479856ms)","trace[1722354228] 'range keys from in-memory index tree'  (duration: 202.182885ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T17:24:54.604Z","caller":"traceutil/trace.go:171","msg":"trace[1730556547] transaction","detail":"{read_only:false; response_revision:759; number_of_response:1; }","duration":"134.390908ms","start":"2022-06-02T17:24:54.469Z","end":"2022-06-02T17:24:54.604Z","steps":["trace[1730556547] 'process raft request'  (duration: 60.886017ms)","trace[1730556547] 'compare'  (duration: 73.276523ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T17:25:05.703Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"134.563066ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-02T17:25:05.703Z","caller":"traceutil/trace.go:171","msg":"trace[1714029383] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:874; }","duration":"134.646444ms","start":"2022-06-02T17:25:05.569Z","end":"2022-06-02T17:25:05.703Z","steps":["trace[1714029383] 'range keys from in-memory index tree'  (duration: 134.449555ms)"],"step_count":1}
	
	* 
	* ==> etcd [b2cbc18e9595] <==
	* {"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220602171905-283122 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:19:22.950Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-02T17:19:22.950Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T17:24:08.848Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-02T17:24:08.848Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220602171905-283122","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/06/02 17:24:08 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/02 17:24:08 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-02T17:24:08.938Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-06-02T17:24:08.940Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:24:08.941Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:24:08.941Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220602171905-283122","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  17:29:56 up  2:12,  0 users,  load average: 0.15, 0.37, 0.61
	Linux functional-20220602171905-283122 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1c227889c513] <==
	* I0602 17:24:21.338298       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0602 17:24:21.204985       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0602 17:24:21.338588       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0602 17:24:21.338599       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0602 17:24:21.339046       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0602 17:24:21.340513       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0602 17:24:22.234058       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0602 17:24:22.234100       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0602 17:24:22.239133       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0602 17:24:25.393716       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0602 17:24:26.249757       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 17:24:26.311033       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0602 17:24:33.135876       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.107.133.29]
	I0602 17:24:33.151094       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 17:24:33.155517       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0602 17:24:33.183457       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0602 17:24:34.538059       1 alloc.go:329] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.101.23.193]
	I0602 17:24:54.146448       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.110.83.97]
	I0602 17:24:54.626286       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.109.115.66]
	I0602 17:24:55.737628       1 controller.go:611] quota admission added evaluator for: namespaces
	I0602 17:24:55.753893       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 17:24:55.862750       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 17:24:55.940640       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0602 17:24:56.252986       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.111.68.154]
	I0602 17:24:56.342959       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.96.220.201]
	
	* 
	* ==> kube-apiserver [9dda91ef327e] <==
	* I0602 17:24:16.847387       1 server.go:565] external host was not specified, using 192.168.49.2
	I0602 17:24:16.847934       1 server.go:172] Version: v1.23.6
	E0602 17:24:16.848289       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
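	
	This is the root cause of the apiserver restart visible in the container status table: attempt 0 (9dda91ef327e) apparently raced the shutting-down apiserver for port 8441, failed to bind, and exited; attempt 1 (1c227889c513) then bound successfully and finished syncing its caches at 17:24:21. Had the bind error persisted, identifying the process holding the port from inside the node would be the next step (assuming ss is present in the kicbase image):
	
	    minikube ssh -p functional-20220602171905-283122 -- sudo ss -ltnp | grep 8441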
	
	* 
	* ==> kube-controller-manager [020a23185b37] <==
	* I0602 17:24:55.948656       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 17:24:55.949887       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-65b4bd797" failed with pods "dashboard-metrics-scraper-65b4bd797-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 17:24:55.949955       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-65b4bd797" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-65b4bd797-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0602 17:24:55.959646       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-cd7c84bfc-z2z56"
	I0602 17:24:56.046020       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-65b4bd797" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-65b4bd797-p2f56"
	I0602 17:25:01.752612       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:25:11.335031       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:25:26.335803       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:25:41.335966       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:25:56.336133       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:26:11.337043       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:26:26.337510       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:26:41.338482       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:26:56.339354       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:27:11.340218       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:27:26.340479       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:27:41.341361       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:27:56.342351       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:28:11.343116       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:28:26.343450       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:28:41.343821       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:28:56.344394       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:29:11.345487       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:29:26.345644       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:29:41.346488       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
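	
	The myclaim PVC keeps emitting ExternalProvisioning events every 15 seconds because its storage class delegates to the external provisioner k8s.io/minikube-hostpath, which is served by the storage-provisioner pod shown restart-looping in the container status section; until that pod stays up, the claim cannot bind. Confirming the chain:
	
	    kubectl --context functional-20220602171905-283122 get pvc myclaim
	    kubectl --context functional-20220602171905-283122 get storageclass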
	
	* 
	* ==> kube-controller-manager [5353917f3d71] <==
	* I0602 17:19:41.044509       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0602 17:19:41.046329       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0602 17:19:41.054618       1 shared_informer.go:247] Caches are synced for node 
	I0602 17:19:41.054659       1 range_allocator.go:173] Starting range CIDR allocator
	I0602 17:19:41.054664       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0602 17:19:41.054673       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0602 17:19:41.059714       1 range_allocator.go:374] Set node functional-20220602171905-283122 PodCIDR to [10.244.0.0/24]
	I0602 17:19:41.095042       1 shared_informer.go:247] Caches are synced for endpoint 
	I0602 17:19:41.095495       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0602 17:19:41.139576       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0602 17:19:41.192984       1 shared_informer.go:247] Caches are synced for HPA 
	I0602 17:19:41.194248       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0602 17:19:41.248010       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:19:41.265878       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:19:41.293388       1 shared_informer.go:247] Caches are synced for disruption 
	I0602 17:19:41.293415       1 disruption.go:371] Sending events to api server.
	I0602 17:19:41.300273       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0602 17:19:41.347564       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0602 17:19:41.669206       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:19:41.702715       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-x5hdb"
	I0602 17:19:41.744331       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:19:41.744361       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0602 17:19:42.052920       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-fqvms"
	I0602 17:19:42.060506       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-bkxkg"
	I0602 17:19:42.153428       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-bkxkg"
	
	* 
	* ==> kube-proxy [4b794e67eb6e] <==
	* I0602 17:19:42.955955       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0602 17:19:42.956055       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0602 17:19:42.956101       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 17:19:43.046822       1 server_others.go:206] "Using iptables Proxier"
	I0602 17:19:43.046872       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 17:19:43.046883       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 17:19:43.046913       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 17:19:43.047532       1 server.go:656] "Version info" version="v1.23.6"
	I0602 17:19:43.048394       1 config.go:317] "Starting service config controller"
	I0602 17:19:43.048449       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 17:19:43.052555       1 config.go:226] "Starting endpoint slice config controller"
	I0602 17:19:43.052575       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 17:19:43.148657       1 shared_informer.go:247] Caches are synced for service config 
	I0602 17:19:43.153634       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-proxy [bc1df717c35f] <==
	* E0602 17:24:10.349444       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220602171905-283122": dial tcp 192.168.49.2:8441: connect: connection refused
	E0602 17:24:13.741547       1 node.go:152] Failed to retrieve node info: nodes "functional-20220602171905-283122" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
	I0602 17:24:16.039874       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0602 17:24:16.039910       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0602 17:24:16.039948       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 17:24:16.065171       1 server_others.go:206] "Using iptables Proxier"
	I0602 17:24:16.065200       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 17:24:16.065206       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 17:24:16.065218       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 17:24:16.066337       1 server.go:656] "Version info" version="v1.23.6"
	I0602 17:24:16.067202       1 config.go:226] "Starting endpoint slice config controller"
	I0602 17:24:16.067228       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 17:24:16.067249       1 config.go:317] "Starting service config controller"
	I0602 17:24:16.067256       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 17:24:16.167335       1 shared_informer.go:247] Caches are synced for service config 
	I0602 17:24:16.167819       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [22f9f15b9a3f] <==
	* E0602 17:19:26.035397       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0602 17:19:26.841490       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0602 17:19:26.841523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0602 17:19:26.896763       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0602 17:19:26.896815       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0602 17:19:26.938966       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0602 17:19:26.939009       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 17:19:26.944131       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0602 17:19:26.944164       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0602 17:19:26.968866       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0602 17:19:26.968913       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0602 17:19:27.035032       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 17:19:27.035075       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0602 17:19:27.035098       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0602 17:19:27.035079       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 17:19:27.124268       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0602 17:19:27.124307       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0602 17:19:27.235858       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0602 17:19:27.235899       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0602 17:19:27.273385       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 17:19:27.273429       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0602 17:19:29.756149       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0602 17:24:08.848174       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 17:24:08.849027       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0602 17:24:08.849168       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	
	* 
	* ==> kube-scheduler [5cabfd9de847] <==
	* W0602 17:24:13.643183       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0602 17:24:13.643220       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 17:24:13.643231       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0602 17:24:13.643240       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0602 17:24:13.741838       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0602 17:24:13.743930       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0602 17:24:13.744002       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 17:24:13.744024       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0602 17:24:13.744186       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0602 17:24:13.845159       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0602 17:24:21.257394       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
	E0602 17:24:21.257493       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E0602 17:24:21.257587       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E0602 17:24:21.257643       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
	E0602 17:24:21.258016       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E0602 17:24:21.258123       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E0602 17:24:21.258145       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0602 17:24:21.258349       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E0602 17:24:21.258485       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
	E0602 17:24:21.258571       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E0602 17:24:21.258629       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E0602 17:24:21.258667       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0602 17:24:21.261209       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0602 17:24:21.339173       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0602 17:24:21.346397       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 17:19:13 UTC, end at Thu 2022-06-02 17:29:56 UTC. --
	Jun 02 17:28:37 functional-20220602171905-283122 kubelet[7096]: E0602 17:28:37.353559    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-cd7c84bfc-z2z56_kubernetes-dashboard(156e21db-09c3-4506-be4c-8217d476cb07)\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-z2z56" podUID=156e21db-09c3-4506-be4c-8217d476cb07
	Jun 02 17:28:43 functional-20220602171905-283122 kubelet[7096]: I0602 17:28:43.353511    7096 scope.go:110] "RemoveContainer" containerID="c59100e6de6358f997c75c3c6d6879a7dd58629e1cacf11068ce831654d16fc6"
	Jun 02 17:28:43 functional-20220602171905-283122 kubelet[7096]: E0602 17:28:43.353807    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4)\"" pod="kube-system/storage-provisioner" podUID=ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4
	Jun 02 17:28:48 functional-20220602171905-283122 kubelet[7096]: I0602 17:28:48.353383    7096 scope.go:110] "RemoveContainer" containerID="c9e9483297a4477fc703ac98e4de2dfc179cfe665422f68be0c8f2d7e30470da"
	Jun 02 17:28:48 functional-20220602171905-283122 kubelet[7096]: E0602 17:28:48.353739    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-cd7c84bfc-z2z56_kubernetes-dashboard(156e21db-09c3-4506-be4c-8217d476cb07)\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-z2z56" podUID=156e21db-09c3-4506-be4c-8217d476cb07
	Jun 02 17:28:56 functional-20220602171905-283122 kubelet[7096]: I0602 17:28:56.353579    7096 scope.go:110] "RemoveContainer" containerID="c59100e6de6358f997c75c3c6d6879a7dd58629e1cacf11068ce831654d16fc6"
	Jun 02 17:28:56 functional-20220602171905-283122 kubelet[7096]: E0602 17:28:56.353823    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4)\"" pod="kube-system/storage-provisioner" podUID=ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4
	Jun 02 17:29:02 functional-20220602171905-283122 kubelet[7096]: I0602 17:29:02.353154    7096 scope.go:110] "RemoveContainer" containerID="c9e9483297a4477fc703ac98e4de2dfc179cfe665422f68be0c8f2d7e30470da"
	Jun 02 17:29:02 functional-20220602171905-283122 kubelet[7096]: E0602 17:29:02.353451    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-cd7c84bfc-z2z56_kubernetes-dashboard(156e21db-09c3-4506-be4c-8217d476cb07)\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-z2z56" podUID=156e21db-09c3-4506-be4c-8217d476cb07
	Jun 02 17:29:08 functional-20220602171905-283122 kubelet[7096]: I0602 17:29:08.352824    7096 scope.go:110] "RemoveContainer" containerID="c59100e6de6358f997c75c3c6d6879a7dd58629e1cacf11068ce831654d16fc6"
	Jun 02 17:29:08 functional-20220602171905-283122 kubelet[7096]: E0602 17:29:08.353102    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4)\"" pod="kube-system/storage-provisioner" podUID=ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4
	Jun 02 17:29:14 functional-20220602171905-283122 kubelet[7096]: I0602 17:29:14.353122    7096 scope.go:110] "RemoveContainer" containerID="c9e9483297a4477fc703ac98e4de2dfc179cfe665422f68be0c8f2d7e30470da"
	Jun 02 17:29:14 functional-20220602171905-283122 kubelet[7096]: E0602 17:29:14.353428    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-cd7c84bfc-z2z56_kubernetes-dashboard(156e21db-09c3-4506-be4c-8217d476cb07)\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-z2z56" podUID=156e21db-09c3-4506-be4c-8217d476cb07
	Jun 02 17:29:21 functional-20220602171905-283122 kubelet[7096]: I0602 17:29:21.353259    7096 scope.go:110] "RemoveContainer" containerID="c59100e6de6358f997c75c3c6d6879a7dd58629e1cacf11068ce831654d16fc6"
	Jun 02 17:29:21 functional-20220602171905-283122 kubelet[7096]: E0602 17:29:21.353477    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4)\"" pod="kube-system/storage-provisioner" podUID=ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4
	Jun 02 17:29:28 functional-20220602171905-283122 kubelet[7096]: I0602 17:29:28.352723    7096 scope.go:110] "RemoveContainer" containerID="c9e9483297a4477fc703ac98e4de2dfc179cfe665422f68be0c8f2d7e30470da"
	Jun 02 17:29:28 functional-20220602171905-283122 kubelet[7096]: E0602 17:29:28.353073    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-cd7c84bfc-z2z56_kubernetes-dashboard(156e21db-09c3-4506-be4c-8217d476cb07)\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-z2z56" podUID=156e21db-09c3-4506-be4c-8217d476cb07
	Jun 02 17:29:34 functional-20220602171905-283122 kubelet[7096]: I0602 17:29:34.352933    7096 scope.go:110] "RemoveContainer" containerID="c59100e6de6358f997c75c3c6d6879a7dd58629e1cacf11068ce831654d16fc6"
	Jun 02 17:29:34 functional-20220602171905-283122 kubelet[7096]: E0602 17:29:34.353226    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4)\"" pod="kube-system/storage-provisioner" podUID=ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4
	Jun 02 17:29:43 functional-20220602171905-283122 kubelet[7096]: I0602 17:29:43.353112    7096 scope.go:110] "RemoveContainer" containerID="c9e9483297a4477fc703ac98e4de2dfc179cfe665422f68be0c8f2d7e30470da"
	Jun 02 17:29:43 functional-20220602171905-283122 kubelet[7096]: E0602 17:29:43.353416    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-cd7c84bfc-z2z56_kubernetes-dashboard(156e21db-09c3-4506-be4c-8217d476cb07)\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-z2z56" podUID=156e21db-09c3-4506-be4c-8217d476cb07
	Jun 02 17:29:48 functional-20220602171905-283122 kubelet[7096]: I0602 17:29:48.353095    7096 scope.go:110] "RemoveContainer" containerID="c59100e6de6358f997c75c3c6d6879a7dd58629e1cacf11068ce831654d16fc6"
	Jun 02 17:29:48 functional-20220602171905-283122 kubelet[7096]: E0602 17:29:48.353357    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4)\"" pod="kube-system/storage-provisioner" podUID=ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4
	Jun 02 17:29:54 functional-20220602171905-283122 kubelet[7096]: I0602 17:29:54.352902    7096 scope.go:110] "RemoveContainer" containerID="c9e9483297a4477fc703ac98e4de2dfc179cfe665422f68be0c8f2d7e30470da"
	Jun 02 17:29:54 functional-20220602171905-283122 kubelet[7096]: E0602 17:29:54.353269    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-cd7c84bfc-z2z56_kubernetes-dashboard(156e21db-09c3-4506-be4c-8217d476cb07)\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-z2z56" podUID=156e21db-09c3-4506-be4c-8217d476cb07
	
	* 
	* ==> kubernetes-dashboard [c9e9483297a4] <==
	* 2022/06/02 17:28:16 Using namespace: kubernetes-dashboard
	2022/06/02 17:28:16 Using in-cluster config to connect to apiserver
	2022/06/02 17:28:16 Using secret token for csrf signing
	2022/06/02 17:28:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/02 17:28:16 Starting overwatch
	panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00055faf0)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x30e
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0001c0080)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:527 +0x94
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0x19f098c)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:495 +0x32
	github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:594
	main.main()
		/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:95 +0x1cf
	
	* 
	* ==> storage-provisioner [c59100e6de63] <==
	* I0602 17:27:15.478364       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0602 17:27:15.481178       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-20220602171905-283122 -n functional-20220602171905-283122
helpers_test.go:261: (dbg) Run:  kubectl --context functional-20220602171905-283122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-mount
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-20220602171905-283122 describe pod busybox-mount
helpers_test.go:280: (dbg) kubectl --context functional-20220602171905-283122 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:         busybox-mount
	Namespace:    default
	Priority:     0
	Node:         functional-20220602171905-283122/192.168.49.2
	Start Time:   Thu, 02 Jun 2022 17:24:38 +0000
	Labels:       integration-test=busybox-mount
	Annotations:  <none>
	Status:       Succeeded
	IP:           172.17.0.5
	IPs:
	  IP:  172.17.0.5
	Containers:
	  mount-munger:
	    Container ID:  docker://8217a37d80b6ae37401153b5b319dbf9996bea0661bf786a4141fe45e9ff1046
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Jun 2022 17:24:46 +0000
	      Finished:     Thu, 02 Jun 2022 17:24:47 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tc2jq (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tc2jq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m18s  default-scheduler  Successfully assigned default/busybox-mount to functional-20220602171905-283122
	  Normal  Pulling    5m18s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m11s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 6.995722807s
	  Normal  Created    5m11s  kubelet            Created container mount-munger
	  Normal  Started    5m11s  kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:283: <<< TestFunctional/parallel/DashboardCmd FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/DashboardCmd (302.61s)
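The dashboard crash has a single root cause visible in the logs above: on startup the dashboard initializes its CSRF token by fetching the kubernetes-dashboard-csrf secret, and that first in-cluster API call fails with "dial tcp 10.96.0.1:443: connect: connection refused", so the process panics and kubelet backs it off (the CrashLoopBackOff entries in the kubelet log). A minimal client-go sketch of the same failing call, assuming an in-cluster environment; illustrative only, not the dashboard's actual code:

	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// rest.InClusterConfig targets the kubernetes service VIP, which
		// resolves to 10.96.0.1:443 in this cluster per the panic above.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Mirrors the GET the dashboard issues while initializing its CSRF key.
		_, err = cs.CoreV1().Secrets("kubernetes-dashboard").
			Get(context.TODO(), "kubernetes-dashboard-csrf", metav1.GetOptions{})
		if err != nil {
			// The dashboard panics at this point; "connection refused" means
			// nothing was answering behind the apiserver service IP.
			log.Fatalf("csrf secret fetch failed: %v", err)
		}
	}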

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (194.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008074169s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220602171905-283122 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220602171905-283122 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220602171905-283122 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220602171905-283122 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220602171905-283122 get pvc myclaim -o=json

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220602171905-283122 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220602171905-283122 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220602171905-283122 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220602171905-283122 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220602171905-283122 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220602171905-283122 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220602171905-283122 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220602171905-283122 get pvc myclaim -o=json
functional_test_pvc_test.go:92: failed to check storage phase: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:8c2d086e-b1f1-4465-9dd9-e15fd5df601c ResourceVersion:871 Generation:0 CreationTimestamp:2022-06-02 17:25:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ZZZ_DeprecatedClusterName: ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0015982e0 VolumeMode:0xc0015982f0 DataSource:nil DataSourceRef:nil} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] ResizeStatus:<nil>}})
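The claim never left Pending because nothing provisioned a volume for it: the provisioning events earlier in this report show the PVC "waiting for a volume to be created" by k8s.io/minikube-hostpath, while the storage-provisioner pod was itself in CrashLoopBackOff (see the kubelet and storage-provisioner sections of the post-mortem logs below). The applied claim, recoverable from the last-applied-configuration annotation in the dump above, requests a 500Mi ReadWriteOnce volume. The test shells out to kubectl to poll the claim; a hedged client-go sketch of an equivalent polling loop, with the interval, timeout, and kubeconfig path as assumptions:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: default kubeconfig with the functional test's context active.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Poll until the claim binds; 5s/4m are assumed values, not the test's.
		err = wait.PollImmediate(5*time.Second, 4*time.Minute, func() (bool, error) {
			pvc, err := cs.CoreV1().PersistentVolumeClaims("default").
				Get(context.TODO(), "myclaim", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return pvc.Status.Phase == corev1.ClaimBound, nil
		})
		if err != nil {
			fmt.Println("myclaim never became Bound:", err) // the failure seen above
		}
	}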
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220602171905-283122
helpers_test.go:235: (dbg) docker inspect functional-20220602171905-283122:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be",
	        "Created": "2022-06-02T17:19:13.281758205Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306853,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:19:13.665348671Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be/hostname",
	        "HostsPath": "/var/lib/docker/containers/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be/hosts",
	        "LogPath": "/var/lib/docker/containers/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be/ccf73bf4d78c7b204c2b467200b9fa8e02dddb5e9163040ed879a288e345c9be-json.log",
	        "Name": "/functional-20220602171905-283122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220602171905-283122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220602171905-283122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/86fe0ac079b421c9dbe7f7740293c7eec9418fad08adbe6a17a004a9f4752e8c-init/diff:/var/lib/docker/overlay2/9517781e97ae29bbe1cf38381a601e806342524da1c5d5dc15ba54ec17f204b6/diff:/var/lib/docker/overlay2/28e14d8724bd82b9b6adde55b66e6d1f1be4fa6c3379f2f44ca1138dd648175a/diff:/var/lib/docker/overlay2/24589dfb9ddc941e7cb22b5e38dec69d1af0a29a330d344152ca7f26010354a8/diff:/var/lib/docker/overlay2/66c45d6273a3507b6e0b6b0753ec4fc82f07160fd115626e24a42bda27bbba86/diff:/var/lib/docker/overlay2/a6f0c661bd178a2a0fef034493a114067e8952b8516e69aff1e1e94a75918bbc/diff:/var/lib/docker/overlay2/5b40a045506372be84bfb01d05b068bc4cc926c5eb9e868226c4c386e8a331c7/diff:/var/lib/docker/overlay2/dbf9a9c6e59628984c95ea93ff4ada66c53b22ca3c2f910f70efba895cf40718/diff:/var/lib/docker/overlay2/5eea619393ee989a35a37a13b0633685af91729e1b74f6e73fd94b63c9d7d1fa/diff:/var/lib/docker/overlay2/24de6fbece5386c3658fa3e2c01723e6c1c24fb837d53b3d5a8c75b64ce2cd06/diff:/var/lib/docker/overlay2/04593e
f7d3bf72ee8b7439b0326af5b9818930e0dd48c35dba97e3b4ae39f4ee/diff:/var/lib/docker/overlay2/09310697cecb557085bb4d5dfc810a82fc533759ad98179263b2fc94153a64fa/diff:/var/lib/docker/overlay2/ee75c420f0615a19a5d44cfe042bdca5a2cf0f1c541168ba424140cf2aa7f98d/diff:/var/lib/docker/overlay2/fc8643a3e2060ab0365cd1227ab9ebbd5a573e6ebf8379bc6b64a756a1f66562/diff:/var/lib/docker/overlay2/0f8571cab87b35114fab495cbde9af8ac7e58be99412da82a79c90239e2227e1/diff:/var/lib/docker/overlay2/e1a5ae079e4f566081e5e786955257c23809c8719ea6b4593c8c91ac00380379/diff:/var/lib/docker/overlay2/373565e63dec12c85906e80b4a030f7d852992a756749be9424ac17802d589c0/diff:/var/lib/docker/overlay2/b886b42aa5e9f06b4381a6f58d55338774a2d02af2e91e3de156aa4bca4e4be0/diff:/var/lib/docker/overlay2/dee534a744ce54d08ca752b7f64e1772c063d4ba1e008c5a9773188bb009c377/diff:/var/lib/docker/overlay2/c8cf6667f29ffc5417675e4bd8eee6a91468de13292e3cb6ae2c70d3ad56d5fd/diff:/var/lib/docker/overlay2/072b57a4228c3928bdb27162809b102b485bb94c5f726c9a3e9892267ed30839/diff:/var/lib/d
ocker/overlay2/491d5b51bf6de1cfc4eef288c454f54e3b424e3b83fd9dc216c35891c7923f55/diff:/var/lib/docker/overlay2/78f167c2cc286e4058ef09d5a719437afddde854aac41b8d054567865fa61024/diff:/var/lib/docker/overlay2/10763b3ba26983ed9426ccca08bfceb7037d95c69827847e2ff755b4f2c5a4ab/diff:/var/lib/docker/overlay2/e559098efb21164d96bd0aee50fee9f48310df68887c5c884f4ba3f3fc895260/diff:/var/lib/docker/overlay2/555ef1505fe2b1ab4244f073e9212f269d70d8cd6ca0cd517d21cc80cd25b4c1/diff:/var/lib/docker/overlay2/0201da7c907a1f8f564cda48c3f3a403e484d901739d2584f1343a7907685c5e/diff:/var/lib/docker/overlay2/0abae5d84f8497a5114349421415431c43a34015ad98f6c9c3140143d2f1f383/diff:/var/lib/docker/overlay2/80c37982eef2dc00bd08f208551df377570134af84008156f6a500855fd2ba10/diff:/var/lib/docker/overlay2/cc38481bb11be97cd54c0032709337a491580c2f0ae6d3f7eec4c3616f8e3126/diff:/var/lib/docker/overlay2/dda978aeb4c7317bafbc077229c4417a4d8e7658d9604803c401448b471eab5e/diff:/var/lib/docker/overlay2/7c230ae0970689e2932cdaadc9e3f86ec8215fbf384c9eb59cfb958daf7
3197e/diff:/var/lib/docker/overlay2/145db51b169211e46c3bbab5ca998b2a019a4a813801acc2dae9b2dc09bc2ecc/diff:/var/lib/docker/overlay2/e604163672a013549c59bc042d1b932a3dad9e104d7079fbbb3ca24c29351c0d/diff:/var/lib/docker/overlay2/5a304d8680fd3fb594754173a70646923e2715ed21c98b8c43e1f1ef2d7abd6a/diff:/var/lib/docker/overlay2/b77f00351ddd70be2f3d9dd622b78e1a2937c4ba71585b357fef88d285eb762e/diff:/var/lib/docker/overlay2/196b6ae042b9b08748253c16f5e2b56d7f9a25077008d3d447cbc75447ed5e2a/diff:/var/lib/docker/overlay2/8c7c3fbb0bed8f7440d11104257806c70d16fd7d432311fc208d5012a439e5a1/diff:/var/lib/docker/overlay2/10604aa3b8ed5698c9fa8f55e00b9a9dc27d6b99135255d6c756fc080e615fc6/diff:/var/lib/docker/overlay2/c6602a17e9b0ab1d84297109204b48e60f293952fbc1feaf362895ec86860ead/diff:/var/lib/docker/overlay2/3828c5e50db9be2b4bc30fce040392ea735da5eaa819ad0848cb4fb79fc367da/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/86fe0ac079b421c9dbe7f7740293c7eec9418fad08adbe6a17a004a9f4752e8c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/86fe0ac079b421c9dbe7f7740293c7eec9418fad08adbe6a17a004a9f4752e8c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/86fe0ac079b421c9dbe7f7740293c7eec9418fad08adbe6a17a004a9f4752e8c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220602171905-283122",
	                "Source": "/var/lib/docker/volumes/functional-20220602171905-283122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220602171905-283122",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220602171905-283122",
	                "name.minikube.sigs.k8s.io": "functional-20220602171905-283122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0b577a7c7c0f72102b1d1c7e48acd0a27d4af6fac311a08cd9abd09a1ffd9224",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49457"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49456"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49453"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49455"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49454"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0b577a7c7c0f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220602171905-283122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ccf73bf4d78c",
	                        "functional-20220602171905-283122"
	                    ],
	                    "NetworkID": "b21549cf349aba8b9852bb7975ba376ac9fe089fee8b7f76e4abbd8a3c8aa318",
	                    "EndpointID": "1dd452a0f5b2f58c8dec0c5a585634fa07b5e2ec3756cb8b90f9991f5515fd22",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
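For orientation: the inspect output maps this profile's apiserver port, 8441/tcp, to 127.0.0.1:49454 on the host, the same endpoint the earlier kube-proxy error reached over the container network as 192.168.49.2:8441. A quick host-side reachability probe (a hypothetical helper, not part of the test suite; the host port is specific to this run):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 127.0.0.1:49454 is the published mapping of 8441/tcp taken from the
		// docker inspect output above; it changes from run to run.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:49454", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver endpoint unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver endpoint reachable")
	}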
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-20220602171905-283122 -n functional-20220602171905-283122
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-20220602171905-283122 logs -n 25: (1.447293975s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|-------------------------------------------------------------------------|----------------------------------|---------|----------------|---------------------|---------------------|
	|    Command     |                                  Args                                   |             Profile              |  User   |    Version     |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------|----------------------------------|---------|----------------|---------------------|---------------------|
	| image          | functional-20220602171905-283122 image save                             | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-20220602171905-283122 |                                  |         |                |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122 image rm                               | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-20220602171905-283122 |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | image ls                                                                |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122 image load                             | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | image ls                                                                |                                  |         |                |                     |                     |
	| ssh            | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | ssh stat                                                                |                                  |         |                |                     |                     |
	|                | /mount-9p/created-by-test                                               |                                  |         |                |                     |                     |
	| ssh            | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | ssh stat                                                                |                                  |         |                |                     |                     |
	|                | /mount-9p/created-by-pod                                                |                                  |         |                |                     |                     |
	| ssh            | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | ssh sudo umount -f /mount-9p                                            |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122 image save --daemon                    | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-20220602171905-283122 |                                  |         |                |                     |                     |
	| ssh            | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | ssh findmnt -T /mount-9p | grep                                         |                                  |         |                |                     |                     |
	|                | 9p                                                                      |                                  |         |                |                     |                     |
	| ssh            | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:24 UTC | 02 Jun 22 17:24 UTC |
	|                | ssh -- ls -la /mount-9p                                                 |                                  |         |                |                     |                     |
	| service        | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | service hello-node-connect --url                                        |                                  |         |                |                     |                     |
	| update-context | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | update-context --alsologtostderr                                        |                                  |         |                |                     |                     |
	|                | -v=2                                                                    |                                  |         |                |                     |                     |
	| service        | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | service list                                                            |                                  |         |                |                     |                     |
	| update-context | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | update-context --alsologtostderr                                        |                                  |         |                |                     |                     |
	|                | -v=2                                                                    |                                  |         |                |                     |                     |
	| update-context | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | update-context --alsologtostderr                                        |                                  |         |                |                     |                     |
	|                | -v=2                                                                    |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | image ls --format short                                                 |                                  |         |                |                     |                     |
	| service        | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | service --namespace=default                                             |                                  |         |                |                     |                     |
	|                | --https --url hello-node                                                |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | image ls --format yaml                                                  |                                  |         |                |                     |                     |
	| service        | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | service hello-node --url                                                |                                  |         |                |                     |                     |
	|                | --format={{.IP}}                                                        |                                  |         |                |                     |                     |
	| service        | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | service hello-node --url                                                |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | image ls --format json                                                  |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | image ls --format table                                                 |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122 image build -t                         | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | localhost/my-image:functional-20220602171905-283122                     |                                  |         |                |                     |                     |
	|                | testdata/build                                                          |                                  |         |                |                     |                     |
	| image          | functional-20220602171905-283122                                        | functional-20220602171905-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:25 UTC | 02 Jun 22 17:25 UTC |
	|                | image ls                                                                |                                  |         |                |                     |                     |
	|----------------|-------------------------------------------------------------------------|----------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 17:24:54
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 17:24:54.095428  324439 out.go:296] Setting OutFile to fd 1 ...
	I0602 17:24:54.095633  324439 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:24:54.095645  324439 out.go:309] Setting ErrFile to fd 2...
	I0602 17:24:54.095651  324439 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:24:54.095773  324439 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 17:24:54.096004  324439 out.go:303] Setting JSON to false
	I0602 17:24:54.106014  324439 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7647,"bootTime":1654183047,"procs":347,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0602 17:24:54.106126  324439 start.go:125] virtualization: kvm guest
	I0602 17:24:54.109961  324439 out.go:177] * [functional-20220602171905-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0602 17:24:54.112105  324439 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 17:24:54.114064  324439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 17:24:54.115761  324439 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:24:54.117342  324439 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 17:24:54.118880  324439 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0602 17:24:54.120817  324439 config.go:178] Loaded profile config "functional-20220602171905-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:24:54.121342  324439 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 17:24:54.173331  324439 docker.go:137] docker version: linux-20.10.16
	I0602 17:24:54.173469  324439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:24:54.287775  324439 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:72 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:40 SystemTime:2022-06-02 17:24:54.209895397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:24:54.287878  324439 docker.go:254] overlay module found
	I0602 17:24:54.291515  324439 out.go:177] * Using the docker driver based on existing profile
	I0602 17:24:54.292985  324439 start.go:284] selected driver: docker
	I0602 17:24:54.293034  324439 start.go:806] validating driver "docker" against &{Name:functional-20220602171905-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602171905-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:24:54.293159  324439 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 17:24:54.293487  324439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:24:54.405188  324439 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:72 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:40 SystemTime:2022-06-02 17:24:54.3270598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:24:54.405716  324439 cni.go:95] Creating CNI manager for ""
	I0602 17:24:54.405733  324439 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 17:24:54.405751  324439 start_flags.go:306] config:
	{Name:functional-20220602171905-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602171905-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:24:54.408661  324439 out.go:177] * dry-run validation complete!
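
	Aside: the "Last Start" log above uses the klog header layout declared at its top, [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. Below is a minimal Go sketch of a parser for that layout, for readers working with these logs; the regexp and field names are illustrative assumptions, not part of minikube.

	// parseklog.go - illustrative sketch for splitting a klog header line.
	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches e.g. "I0602 17:24:54.095428  324439 out.go:296] Setting OutFile to fd 1 ..."
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		m := klogLine.FindStringSubmatch("I0602 17:24:54.095428  324439 out.go:296] Setting OutFile to fd 1 ...")
		if m == nil {
			fmt.Println("not a klog line")
			return
		}
		// m[1]=severity m[2]=mmdd m[3]=time m[4]=threadid m[5]=file m[6]=line m[7]=msg
		fmt.Printf("%s %s %s tid=%s %s:%s %q\n", m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}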
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 17:19:13 UTC, end at Thu 2022-06-02 17:28:09 UTC. --
	Jun 02 17:24:09 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:09.034031933Z" level=info msg="ignoring event" container=4b794e67eb6e3a8f715632f21fed13811501658f42db268e7aff5e09a7d0dd3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:09 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:09.035835009Z" level=info msg="ignoring event" container=b2cbc18e9595c4538045222bd0302eaf7edfa2dca730e18fb44a5bfee30ac53a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:09 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:09.068429157Z" level=info msg="ignoring event" container=5353917f3d71e2c8eb1026adff6a765f04b08878565484fd6471af65d307e8dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:09 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:09.453926823Z" level=info msg="ignoring event" container=22f9f15b9a3ff477d075ef3e12b621e470e48ec1d2156cf9e4dc638719099d31 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:09 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:09.456833730Z" level=info msg="ignoring event" container=92951e242fba8b7554198442693b938b282b22612387e2389936b56444d193b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:13 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:13.848379818Z" level=info msg="ignoring event" container=b8aba85da98e3534f56f991138893080c425d892a83538e50da55f94975e1f4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:16 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:16.867632840Z" level=info msg="ignoring event" container=9dda91ef327edf5a32279dd9fcf87a912c929e5d5110ced1b392ca4cf7782558 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:17 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:17.435758519Z" level=info msg="ignoring event" container=cd87610507c56062c12ab62a85742939cc69769b9f65a31c73d1129bba837c3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:17 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:17.545371070Z" level=info msg="ignoring event" container=cc1cadc790059b509c3f2c1c7b79375145832a2550c869f11789ae75cb4449cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:21 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:21.954403562Z" level=info msg="ignoring event" container=109f6c67be4d3e212df55bed330bdcb97c1fc48bb17cf72d49dea2b80e9ce6b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:38 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:38.460970076Z" level=info msg="ignoring event" container=3307c17eecb9de8d9e97d99a6315ab9d68f3574d54a473c42d12721710c36d72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:47 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:47.057501793Z" level=info msg="ignoring event" container=8217a37d80b6ae37401153b5b319dbf9996bea0661bf786a4141fe45e9ff1046 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:48 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:48.954546864Z" level=info msg="ignoring event" container=1864c7843705e7d42624a5a3646f79b3d700e7094b952e42f3d1a40fdf9f90f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:24:58 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:24:58.895733264Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Jun 02 17:25:00 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:00.655753956Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jun 02 17:25:06 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:06.848058834Z" level=info msg="ignoring event" container=6db51a182746050c3843e3ab51d8a1a6b308b7b3a127fe8e064743256262a041 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:25:06 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:06.860099329Z" level=info msg="ignoring event" container=95c68876429fe242bc9335c8a49466f1b50c30e6812d091795501e85226826e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:25:07 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:07.246596540Z" level=info msg="ignoring event" container=50e34c87b49d21b678d1636ea47f559e8adac4a30426979f21030a3ef3d8916e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:25:10 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:10.060752806Z" level=info msg="ignoring event" container=3f1df46a144ff1ad941834fdf47d49dde96550c5a247b8d7506e6ca0e3f066e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:25:10 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:10.267081741Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
	Jun 02 17:25:31 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:31.504608206Z" level=info msg="ignoring event" container=4dad6a0ed29937785bce05b21e720349caa4307cc06fbb039299ebe10acc2a4a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:25:51 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:25:51.479376729Z" level=info msg="ignoring event" container=02baf15680d4c1094fce37322cd4b68296dfa0b6acaf643708815296de278b11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:26:01 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:26:01.503960587Z" level=info msg="ignoring event" container=e91b5e1a64a601fb83ea3d38d9e016b6ad5ac47e2a6e48f262771fa66e028681 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:26:45 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:26:45.509859736Z" level=info msg="ignoring event" container=bbcaf9c170212d23eb2eeaed2b42f75848498ad9ed38d28e5a78bf9394ae940a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 02 17:27:15 functional-20220602171905-283122 dockerd[495]: time="2022-06-02T17:27:15.496156361Z" level=info msg="ignoring event" container=c59100e6de6358f997c75c3c6d6879a7dd58629e1cacf11068ce831654d16fc6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID
	c59100e6de635       6e38f40d628db                                                                                          54 seconds ago       Exited              storage-provisioner         10                  c5a904a60d2e4
	bbcaf9c170212       1042d9e0d8fcc                                                                                          About a minute ago   Exited              kubernetes-dashboard        4                   ee72ed23645d6
	b3877ee1080c2       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   3 minutes ago        Running             dashboard-metrics-scraper   0                   2c0a51d506cbe
	9d64287582997       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969          3 minutes ago        Running             echoserver                  0                   f074952e8f7b1
	a80cdae372c72       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969          3 minutes ago        Running             echoserver                  0                   2a7e154e98709
	8217a37d80b6a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    3 minutes ago        Exited              mount-munger                0                   1864c7843705e
	9de98d2e05435       nginx@sha256:a74534e76ee1121d418fa7394ca930eb67440deda413848bc67c68138535b989                          3 minutes ago        Running             nginx                       0                   d4f8df5c33698
	6c77f1098174a       mysql@sha256:7e99b2b8d5bca914ef31059858210f57b009c40375d647f0d4d65ecd01d6b1d5                          3 minutes ago        Running             mysql                       0                   25d6e05720578
	c276864164b49       a4ca41631cc7a                                                                                          3 minutes ago        Running             coredns                     1                   fe66982393c40
	1c227889c513e       8fa62c12256df                                                                                          3 minutes ago        Running             kube-apiserver              1                   32cbbdc559078
	9dda91ef327ed       8fa62c12256df                                                                                          3 minutes ago        Exited              kube-apiserver              0                   32cbbdc559078
	5cabfd9de847a       595f327f224a4                                                                                          4 minutes ago        Running             kube-scheduler              1                   19dfc0de11563
	bc1df717c35fc       4c03754524064                                                                                          4 minutes ago        Running             kube-proxy                  1                   341b9b7854419
	6280eaa54ed8b       25f8c7f3da61c                                                                                          4 minutes ago        Running             etcd                        1                   b937529554388
	020a23185b37e       df7b72818ad2e                                                                                          4 minutes ago        Running             kube-controller-manager     1                   4d6e85f997653
	b8aba85da98e3       a4ca41631cc7a                                                                                          8 minutes ago        Exited              coredns                     0                   0544243af64e4
	4b794e67eb6e3       4c03754524064                                                                                          8 minutes ago        Exited              kube-proxy                  0                   895c379f69180
	b2cbc18e9595c       25f8c7f3da61c                                                                                          8 minutes ago        Exited              etcd                        0                   3f293bdfbc88e
	22f9f15b9a3ff       595f327f224a4                                                                                          8 minutes ago        Exited              kube-scheduler              0                   4fa66a37576fb
	5353917f3d71e       df7b72818ad2e                                                                                          8 minutes ago        Exited              kube-controller-manager     0                   ae98ee0fc94fe
	
	* 
	* ==> coredns [b8aba85da98e] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes,kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [c276864164b4] <==
	* linux/amd64, go1.17.1, 13a9191
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220602171905-283122
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220602171905-283122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=functional-20220602171905-283122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T17_19_29_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 17:19:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220602171905-283122
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 17:28:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 17:25:16 +0000   Thu, 02 Jun 2022 17:19:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 17:25:16 +0000   Thu, 02 Jun 2022 17:19:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 17:25:16 +0000   Thu, 02 Jun 2022 17:19:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 17:25:16 +0000   Thu, 02 Jun 2022 17:24:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220602171905-283122
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                e5257698-d461-4a2f-b7e2-77ca6f3add35
	  Boot ID:                    eac629ea-39e3-4b75-b891-94bd750a4fe6
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-54fbb85-lqgxj                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  default                     hello-node-connect-74cf8bc446-hqrhr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  default                     mysql-b87c45988-h4hfc                                       600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     3m36s
	  default                     nginx-svc                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 coredns-64897985d-fqvms                                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m27s
	  kube-system                 etcd-functional-20220602171905-283122                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m39s
	  kube-system                 kube-apiserver-functional-20220602171905-283122             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-controller-manager-functional-20220602171905-283122    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-proxy-x5hdb                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-scheduler-functional-20220602171905-283122             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kubernetes-dashboard        dashboard-metrics-scraper-65b4bd797-p2f56                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  kubernetes-dashboard        kubernetes-dashboard-cd7c84bfc-z2z56                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 3m53s                  kube-proxy  
	  Normal  Starting                 8m26s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  8m48s (x4 over 8m48s)  kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m48s (x4 over 8m48s)  kubelet     Node functional-20220602171905-283122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m48s (x3 over 8m48s)  kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m48s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 8m48s                  kubelet     Starting kubelet.
	  Normal  Starting                 8m40s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientPID     8m40s                  kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             8m40s                  kubelet     Node functional-20220602171905-283122 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  8m40s                  kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m40s                  kubelet     Node functional-20220602171905-283122 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m40s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m30s                  kubelet     Node functional-20220602171905-283122 status is now: NodeReady
	  Normal  Starting                 3m54s                  kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    3m54s                  kubelet     Node functional-20220602171905-283122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s                  kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m54s                  kubelet     Node functional-20220602171905-283122 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m54s                  kubelet     Node functional-20220602171905-283122 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m54s                  kubelet     Node functional-20220602171905-283122 status is now: NodeHasSufficientMemory
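
	Aside: the percentages in the describe-nodes tables above are each request or limit divided by the node's Allocatable (cpu 8 = 8000m, memory 32873816Ki), truncated to a whole percent. A quick Go check of the "Allocated resources" totals (illustrative only; constants are copied from the output above):

	// allocpct.go - recomputes the Allocated-resources percentages shown above.
	package main

	import "fmt"

	func main() {
		const allocCPUm = 8 * 1000  // node Allocatable: 8 CPUs = 8000 millicores
		const allocMemKi = 32873816 // node Allocatable: memory in Ki

		fmt.Println("cpu requests %:", 1350*100/allocCPUm)      // 1350m -> 16
		fmt.Println("cpu limits   %:", 700*100/allocCPUm)       // 700m  -> 8
		fmt.Println("mem requests %:", 682*1024*100/allocMemKi) // 682Mi -> 2
		fmt.Println("mem limits   %:", 870*1024*100/allocMemKi) // 870Mi -> 2
	}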
	
	* 
	* ==> dmesg <==
	* [  +0.007890] FS-Cache: O-key=[8] 'e41b080000000000'
	[  +0.006311] FS-Cache: N-cookie c=00000000f15c64a3 [p=00000000e165c10a fl=2 nc=0 na=1]
	[  +0.009376] FS-Cache: N-cookie d=00000000c3ebe5aa n=00000000648c1af9
	[  +0.007889] FS-Cache: N-key=[8] 'e41b080000000000'
	[  +0.008619] FS-Cache: Duplicate cookie detected
	[  +0.005160] FS-Cache: O-cookie c=00000000d5f5a506 [p=00000000e165c10a fl=226 nc=0 na=1]
	[  +0.009515] FS-Cache: O-cookie d=00000000c3ebe5aa n=000000004fa77641
	[  +0.007877] FS-Cache: O-key=[8] 'e41b080000000000'
	[  +0.006282] FS-Cache: N-cookie c=000000007109d8c8 [p=00000000e165c10a fl=2 nc=0 na=1]
	[  +0.009340] FS-Cache: N-cookie d=00000000c3ebe5aa n=000000003676904e
	[  +0.007874] FS-Cache: N-key=[8] 'e41b080000000000'
	[  +3.902222] FS-Cache: Duplicate cookie detected
	[  +0.004696] FS-Cache: O-cookie c=0000000082f161a7 [p=00000000e165c10a fl=226 nc=0 na=1]
	[  +0.008176] FS-Cache: O-cookie d=00000000c3ebe5aa n=0000000036c711d9
	[  +0.006523] FS-Cache: O-key=[8] 'e11b080000000000'
	[  +0.004982] FS-Cache: N-cookie c=0000000011ca43b7 [p=00000000e165c10a fl=2 nc=0 na=1]
	[  +0.009330] FS-Cache: N-cookie d=00000000c3ebe5aa n=00000000971af6fd
	[  +0.007850] FS-Cache: N-key=[8] 'e11b080000000000'
	[  +0.446074] FS-Cache: Duplicate cookie detected
	[  +0.004669] FS-Cache: O-cookie c=000000003e4742e5 [p=00000000e165c10a fl=226 nc=0 na=1]
	[  +0.008158] FS-Cache: O-cookie d=00000000c3ebe5aa n=00000000348fa4e2
	[  +0.006491] FS-Cache: O-key=[8] 'e61b080000000000'
	[  +0.005017] FS-Cache: N-cookie c=00000000dd942d83 [p=00000000e165c10a fl=2 nc=0 na=1]
	[  +0.009347] FS-Cache: N-cookie d=00000000c3ebe5aa n=00000000b1a11134
	[  +0.007899] FS-Cache: N-key=[8] 'e61b080000000000'
	
	* 
	* ==> etcd [6280eaa54ed8] <==
	* {"level":"info","ts":"2022-06-02T17:24:10.348Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-02T17:24:10.348Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-02T17:24:10.348Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2022-06-02T17:24:10.352Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:24:10.352Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:24:10.352Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220602171905-283122 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:24:11.235Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:24:11.236Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T17:24:11.236Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T17:24:11.237Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T17:24:11.237Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2022-06-02T17:24:43.265Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"224.74551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2022-06-02T17:24:43.265Z","caller":"traceutil/trace.go:171","msg":"trace[1722354228] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:714; }","duration":"224.889044ms","start":"2022-06-02T17:24:43.040Z","end":"2022-06-02T17:24:43.265Z","steps":["trace[1722354228] 'agreement among raft nodes before linearized reading'  (duration: 22.479856ms)","trace[1722354228] 'range keys from in-memory index tree'  (duration: 202.182885ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-02T17:24:54.604Z","caller":"traceutil/trace.go:171","msg":"trace[1730556547] transaction","detail":"{read_only:false; response_revision:759; number_of_response:1; }","duration":"134.390908ms","start":"2022-06-02T17:24:54.469Z","end":"2022-06-02T17:24:54.604Z","steps":["trace[1730556547] 'process raft request'  (duration: 60.886017ms)","trace[1730556547] 'compare'  (duration: 73.276523ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-02T17:25:05.703Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"134.563066ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-02T17:25:05.703Z","caller":"traceutil/trace.go:171","msg":"trace[1714029383] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:874; }","duration":"134.646444ms","start":"2022-06-02T17:25:05.569Z","end":"2022-06-02T17:25:05.703Z","steps":["trace[1714029383] 'range keys from in-memory index tree'  (duration: 134.449555ms)"],"step_count":1}
	
	* 
	* ==> etcd [b2cbc18e9595] <==
	* {"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220602171905-283122 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T17:19:22.948Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:19:22.949Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:19:22.950Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-06-02T17:19:22.950Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T17:24:08.848Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-06-02T17:24:08.848Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220602171905-283122","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/06/02 17:24:08 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/06/02 17:24:08 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-06-02T17:24:08.938Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-06-02T17:24:08.940Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:24:08.941Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:24:08.941Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220602171905-283122","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  17:28:09 up  2:10,  0 users,  load average: 0.17, 0.48, 0.68
	Linux functional-20220602171905-283122 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1c227889c513] <==
	* I0602 17:24:21.338298       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0602 17:24:21.204985       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0602 17:24:21.338588       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0602 17:24:21.338599       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0602 17:24:21.339046       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0602 17:24:21.340513       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0602 17:24:22.234058       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0602 17:24:22.234100       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0602 17:24:22.239133       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0602 17:24:25.393716       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0602 17:24:26.249757       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 17:24:26.311033       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0602 17:24:33.135876       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.107.133.29]
	I0602 17:24:33.151094       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 17:24:33.155517       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0602 17:24:33.183457       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0602 17:24:34.538059       1 alloc.go:329] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.101.23.193]
	I0602 17:24:54.146448       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.110.83.97]
	I0602 17:24:54.626286       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.109.115.66]
	I0602 17:24:55.737628       1 controller.go:611] quota admission added evaluator for: namespaces
	I0602 17:24:55.753893       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 17:24:55.862750       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 17:24:55.940640       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0602 17:24:56.252986       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.111.68.154]
	I0602 17:24:56.342959       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.96.220.201]
	
	* 
	* ==> kube-apiserver [9dda91ef327e] <==
	* I0602 17:24:16.847387       1 server.go:565] external host was not specified, using 192.168.49.2
	I0602 17:24:16.847934       1 server.go:172] Version: v1.23.6
	E0602 17:24:16.848289       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	* 
	* ==> kube-controller-manager [020a23185b37] <==
	* I0602 17:24:55.934639       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 17:24:55.935249       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-65b4bd797" failed with pods "dashboard-metrics-scraper-65b4bd797-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 17:24:55.935402       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-65b4bd797" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-65b4bd797-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 17:24:55.939149       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-65b4bd797" failed with pods "dashboard-metrics-scraper-65b4bd797-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 17:24:55.939211       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-65b4bd797" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-65b4bd797-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 17:24:55.939379       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0602 17:24:55.948650       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" failed with pods "kubernetes-dashboard-cd7c84bfc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 17:24:55.948656       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-cd7c84bfc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0602 17:24:55.949887       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-65b4bd797" failed with pods "dashboard-metrics-scraper-65b4bd797-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0602 17:24:55.949955       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-65b4bd797" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-65b4bd797-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0602 17:24:55.959646       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-cd7c84bfc-z2z56"
	I0602 17:24:56.046020       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-65b4bd797" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-65b4bd797-p2f56"
	I0602 17:25:01.752612       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:25:11.335031       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:25:26.335803       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:25:41.335966       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:25:56.336133       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:26:11.337043       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:26:26.337510       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:26:41.338482       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:26:56.339354       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:27:11.340218       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:27:26.340479       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:27:41.341361       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0602 17:27:56.342351       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	
	* 
	* ==> kube-controller-manager [5353917f3d71] <==
	* I0602 17:19:41.044509       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0602 17:19:41.046329       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0602 17:19:41.054618       1 shared_informer.go:247] Caches are synced for node 
	I0602 17:19:41.054659       1 range_allocator.go:173] Starting range CIDR allocator
	I0602 17:19:41.054664       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0602 17:19:41.054673       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0602 17:19:41.059714       1 range_allocator.go:374] Set node functional-20220602171905-283122 PodCIDR to [10.244.0.0/24]
	I0602 17:19:41.095042       1 shared_informer.go:247] Caches are synced for endpoint 
	I0602 17:19:41.095495       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0602 17:19:41.139576       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0602 17:19:41.192984       1 shared_informer.go:247] Caches are synced for HPA 
	I0602 17:19:41.194248       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0602 17:19:41.248010       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:19:41.265878       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:19:41.293388       1 shared_informer.go:247] Caches are synced for disruption 
	I0602 17:19:41.293415       1 disruption.go:371] Sending events to api server.
	I0602 17:19:41.300273       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0602 17:19:41.347564       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0602 17:19:41.669206       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:19:41.702715       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-x5hdb"
	I0602 17:19:41.744331       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:19:41.744361       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0602 17:19:42.052920       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-fqvms"
	I0602 17:19:42.060506       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-bkxkg"
	I0602 17:19:42.153428       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-bkxkg"
	
	* 
	* ==> kube-proxy [4b794e67eb6e] <==
	* I0602 17:19:42.955955       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0602 17:19:42.956055       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0602 17:19:42.956101       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 17:19:43.046822       1 server_others.go:206] "Using iptables Proxier"
	I0602 17:19:43.046872       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 17:19:43.046883       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 17:19:43.046913       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 17:19:43.047532       1 server.go:656] "Version info" version="v1.23.6"
	I0602 17:19:43.048394       1 config.go:317] "Starting service config controller"
	I0602 17:19:43.048449       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 17:19:43.052555       1 config.go:226] "Starting endpoint slice config controller"
	I0602 17:19:43.052575       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 17:19:43.148657       1 shared_informer.go:247] Caches are synced for service config 
	I0602 17:19:43.153634       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-proxy [bc1df717c35f] <==
	* E0602 17:24:10.349444       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220602171905-283122": dial tcp 192.168.49.2:8441: connect: connection refused
	E0602 17:24:13.741547       1 node.go:152] Failed to retrieve node info: nodes "functional-20220602171905-283122" is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
	I0602 17:24:16.039874       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0602 17:24:16.039910       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0602 17:24:16.039948       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 17:24:16.065171       1 server_others.go:206] "Using iptables Proxier"
	I0602 17:24:16.065200       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 17:24:16.065206       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 17:24:16.065218       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 17:24:16.066337       1 server.go:656] "Version info" version="v1.23.6"
	I0602 17:24:16.067202       1 config.go:226] "Starting endpoint slice config controller"
	I0602 17:24:16.067228       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 17:24:16.067249       1 config.go:317] "Starting service config controller"
	I0602 17:24:16.067256       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 17:24:16.167335       1 shared_informer.go:247] Caches are synced for service config 
	I0602 17:24:16.167819       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [22f9f15b9a3f] <==
	* E0602 17:19:26.035397       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0602 17:19:26.841490       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0602 17:19:26.841523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0602 17:19:26.896763       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0602 17:19:26.896815       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0602 17:19:26.938966       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0602 17:19:26.939009       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 17:19:26.944131       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0602 17:19:26.944164       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0602 17:19:26.968866       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0602 17:19:26.968913       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0602 17:19:27.035032       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 17:19:27.035075       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0602 17:19:27.035098       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0602 17:19:27.035079       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 17:19:27.124268       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0602 17:19:27.124307       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0602 17:19:27.235858       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0602 17:19:27.235899       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0602 17:19:27.273385       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 17:19:27.273429       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0602 17:19:29.756149       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0602 17:24:08.848174       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 17:24:08.849027       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0602 17:24:08.849168       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	
	* 
	* ==> kube-scheduler [5cabfd9de847] <==
	* W0602 17:24:13.643183       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0602 17:24:13.643220       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 17:24:13.643231       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0602 17:24:13.643240       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0602 17:24:13.741838       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0602 17:24:13.743930       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0602 17:24:13.744002       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0602 17:24:13.744024       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0602 17:24:13.744186       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0602 17:24:13.845159       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0602 17:24:21.257394       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: unknown (get nodes)
	E0602 17:24:21.257493       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E0602 17:24:21.257587       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E0602 17:24:21.257643       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: unknown (get pods)
	E0602 17:24:21.258016       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E0602 17:24:21.258123       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E0602 17:24:21.258145       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0602 17:24:21.258349       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E0602 17:24:21.258485       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
	E0602 17:24:21.258571       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E0602 17:24:21.258629       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E0602 17:24:21.258667       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0602 17:24:21.261209       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0602 17:24:21.339173       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0602 17:24:21.346397       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 17:19:13 UTC, end at Thu 2022-06-02 17:28:10 UTC. --
	Jun 02 17:26:55 functional-20220602171905-283122 kubelet[7096]: E0602 17:26:55.966017    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-cd7c84bfc-z2z56_kubernetes-dashboard(156e21db-09c3-4506-be4c-8217d476cb07)\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-z2z56" podUID=156e21db-09c3-4506-be4c-8217d476cb07
	Jun 02 17:27:03 functional-20220602171905-283122 kubelet[7096]: I0602 17:27:03.353232    7096 scope.go:110] "RemoveContainer" containerID="02baf15680d4c1094fce37322cd4b68296dfa0b6acaf643708815296de278b11"
	Jun 02 17:27:03 functional-20220602171905-283122 kubelet[7096]: E0602 17:27:03.353484    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4)\"" pod="kube-system/storage-provisioner" podUID=ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4
	Jun 02 17:27:08 functional-20220602171905-283122 kubelet[7096]: I0602 17:27:08.353451    7096 scope.go:110] "RemoveContainer" containerID="bbcaf9c170212d23eb2eeaed2b42f75848498ad9ed38d28e5a78bf9394ae940a"
	Jun 02 17:27:08 functional-20220602171905-283122 kubelet[7096]: E0602 17:27:08.353744    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-cd7c84bfc-z2z56_kubernetes-dashboard(156e21db-09c3-4506-be4c-8217d476cb07)\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-z2z56" podUID=156e21db-09c3-4506-be4c-8217d476cb07
	Jun 02 17:27:15 functional-20220602171905-283122 kubelet[7096]: I0602 17:27:15.353092    7096 scope.go:110] "RemoveContainer" containerID="02baf15680d4c1094fce37322cd4b68296dfa0b6acaf643708815296de278b11"
	Jun 02 17:27:16 functional-20220602171905-283122 kubelet[7096]: I0602 17:27:16.235079    7096 scope.go:110] "RemoveContainer" containerID="02baf15680d4c1094fce37322cd4b68296dfa0b6acaf643708815296de278b11"
	Jun 02 17:27:16 functional-20220602171905-283122 kubelet[7096]: I0602 17:27:16.235456    7096 scope.go:110] "RemoveContainer" containerID="c59100e6de6358f997c75c3c6d6879a7dd58629e1cacf11068ce831654d16fc6"
	Jun 02 17:27:16 functional-20220602171905-283122 kubelet[7096]: E0602 17:27:16.235730    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4)\"" pod="kube-system/storage-provisioner" podUID=ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4
	Jun 02 17:27:22 functional-20220602171905-283122 kubelet[7096]: I0602 17:27:22.353258    7096 scope.go:110] "RemoveContainer" containerID="bbcaf9c170212d23eb2eeaed2b42f75848498ad9ed38d28e5a78bf9394ae940a"
	Jun 02 17:27:22 functional-20220602171905-283122 kubelet[7096]: E0602 17:27:22.353558    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-cd7c84bfc-z2z56_kubernetes-dashboard(156e21db-09c3-4506-be4c-8217d476cb07)\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-z2z56" podUID=156e21db-09c3-4506-be4c-8217d476cb07
	Jun 02 17:27:27 functional-20220602171905-283122 kubelet[7096]: I0602 17:27:27.352996    7096 scope.go:110] "RemoveContainer" containerID="c59100e6de6358f997c75c3c6d6879a7dd58629e1cacf11068ce831654d16fc6"
	Jun 02 17:27:27 functional-20220602171905-283122 kubelet[7096]: E0602 17:27:27.353315    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4)\"" pod="kube-system/storage-provisioner" podUID=ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4
	Jun 02 17:27:36 functional-20220602171905-283122 kubelet[7096]: I0602 17:27:36.353001    7096 scope.go:110] "RemoveContainer" containerID="bbcaf9c170212d23eb2eeaed2b42f75848498ad9ed38d28e5a78bf9394ae940a"
	Jun 02 17:27:36 functional-20220602171905-283122 kubelet[7096]: E0602 17:27:36.353411    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-cd7c84bfc-z2z56_kubernetes-dashboard(156e21db-09c3-4506-be4c-8217d476cb07)\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-z2z56" podUID=156e21db-09c3-4506-be4c-8217d476cb07
	Jun 02 17:27:41 functional-20220602171905-283122 kubelet[7096]: I0602 17:27:41.352687    7096 scope.go:110] "RemoveContainer" containerID="c59100e6de6358f997c75c3c6d6879a7dd58629e1cacf11068ce831654d16fc6"
	Jun 02 17:27:41 functional-20220602171905-283122 kubelet[7096]: E0602 17:27:41.352944    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4)\"" pod="kube-system/storage-provisioner" podUID=ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4
	Jun 02 17:27:50 functional-20220602171905-283122 kubelet[7096]: I0602 17:27:50.352781    7096 scope.go:110] "RemoveContainer" containerID="bbcaf9c170212d23eb2eeaed2b42f75848498ad9ed38d28e5a78bf9394ae940a"
	Jun 02 17:27:50 functional-20220602171905-283122 kubelet[7096]: E0602 17:27:50.353115    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-cd7c84bfc-z2z56_kubernetes-dashboard(156e21db-09c3-4506-be4c-8217d476cb07)\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-z2z56" podUID=156e21db-09c3-4506-be4c-8217d476cb07
	Jun 02 17:27:54 functional-20220602171905-283122 kubelet[7096]: I0602 17:27:54.353281    7096 scope.go:110] "RemoveContainer" containerID="c59100e6de6358f997c75c3c6d6879a7dd58629e1cacf11068ce831654d16fc6"
	Jun 02 17:27:54 functional-20220602171905-283122 kubelet[7096]: E0602 17:27:54.353592    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4)\"" pod="kube-system/storage-provisioner" podUID=ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4
	Jun 02 17:28:01 functional-20220602171905-283122 kubelet[7096]: I0602 17:28:01.353624    7096 scope.go:110] "RemoveContainer" containerID="bbcaf9c170212d23eb2eeaed2b42f75848498ad9ed38d28e5a78bf9394ae940a"
	Jun 02 17:28:01 functional-20220602171905-283122 kubelet[7096]: E0602 17:28:01.353926    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-cd7c84bfc-z2z56_kubernetes-dashboard(156e21db-09c3-4506-be4c-8217d476cb07)\"" pod="kubernetes-dashboard/kubernetes-dashboard-cd7c84bfc-z2z56" podUID=156e21db-09c3-4506-be4c-8217d476cb07
	Jun 02 17:28:06 functional-20220602171905-283122 kubelet[7096]: I0602 17:28:06.352587    7096 scope.go:110] "RemoveContainer" containerID="c59100e6de6358f997c75c3c6d6879a7dd58629e1cacf11068ce831654d16fc6"
	Jun 02 17:28:06 functional-20220602171905-283122 kubelet[7096]: E0602 17:28:06.352825    7096 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4)\"" pod="kube-system/storage-provisioner" podUID=ea9aec47-fb0f-47d2-bf85-0cc1ed5773c4
	
	* 
	* ==> kubernetes-dashboard [bbcaf9c17021] <==
	* 2022/06/02 17:26:45 Using namespace: kubernetes-dashboard
	2022/06/02 17:26:45 Using in-cluster config to connect to apiserver
	2022/06/02 17:26:45 Using secret token for csrf signing
	2022/06/02 17:26:45 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/06/02 17:26:45 Starting overwatch
	panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00055faf0)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x30e
	github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000240080)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:527 +0x94
	github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0x19f098c)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:495 +0x32
	github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
		/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:594
	main.main()
		/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:95 +0x1cf
	
	* 
	* ==> storage-provisioner [c59100e6de63] <==
	* I0602 17:27:15.478364       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0602 17:27:15.481178       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
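Reading the dump above as a whole: the control plane was restarted around 17:24 (etcd logs a terminate signal at 17:24:08, and the second kube-apiserver container exits immediately with "bind: address already in use" on 8441, consistent with the old instance still holding the port during the handover). After the restart, both the kubernetes-dashboard and storage-provisioner containers crash-loop on "dial tcp 10.96.0.1:443: connect: connection refused", and the controller-manager keeps emitting ExternalProvisioning events for default/myclaim from 17:25:01 through 17:27:56, so the claim was still unprovisioned when the dump was taken. A minimal sketch of how to confirm that state by hand (hedged: it assumes the kubeconfig context named after the test profile, the same context the harness uses below):

	# Is the claim still Pending, and is the provisioner crash-looping?
	kubectl --context functional-20220602171905-283122 get pvc myclaim
	kubectl --context functional-20220602171905-283122 -n kube-system get pod storage-provisioner
	# Logs of the previous (crashed) provisioner container.
	kubectl --context functional-20220602171905-283122 -n kube-system logs storage-provisioner --previous
	# Check whether anything else holds the apiserver port inside the node (e.g. via minikube ssh; assumes ss is available).
	sudo ss -ltnp | grep 8441

The --previous logs should show whether the provisioner ever got past its apiserver version check; in the dump above it dies at main.go:39 ("error getting server version") before doing any work.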
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-20220602171905-283122 -n functional-20220602171905-283122
helpers_test.go:261: (dbg) Run:  kubectl --context functional-20220602171905-283122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-mount
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-20220602171905-283122 describe pod busybox-mount
helpers_test.go:280: (dbg) kubectl --context functional-20220602171905-283122 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:         busybox-mount
	Namespace:    default
	Priority:     0
	Node:         functional-20220602171905-283122/192.168.49.2
	Start Time:   Thu, 02 Jun 2022 17:24:38 +0000
	Labels:       integration-test=busybox-mount
	Annotations:  <none>
	Status:       Succeeded
	IP:           172.17.0.5
	IPs:
	  IP:  172.17.0.5
	Containers:
	  mount-munger:
	    Container ID:  docker://8217a37d80b6ae37401153b5b319dbf9996bea0661bf786a4141fe45e9ff1046
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Jun 2022 17:24:46 +0000
	      Finished:     Thu, 02 Jun 2022 17:24:47 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tc2jq (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tc2jq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m32s  default-scheduler  Successfully assigned default/busybox-mount to functional-20220602171905-283122
	  Normal  Pulling    3m31s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m24s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 6.995722807s
	  Normal  Created    3m24s  kubelet            Created container mount-munger
	  Normal  Started    3m24s  kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:283: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
E0602 17:28:55.531692  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (194.27s)
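The cert_rotation error just above is cross-test noise rather than part of this failure: the shared client-go certificate watcher is still tracking the client certificate of the already-deleted addons-20220602171222-283122 profile, so the open() fails with "no such file or directory". A quick hedged check that the profile directory really is gone (path copied verbatim from the log line above):

	ls -l /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/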

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (366.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- rollout status deployment/busybox: (2.2728099s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-2cv69 -- nslookup kubernetes.io
E0602 17:37:17.004734  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 17:37:29.867351  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
multinode_test.go:510: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-2cv69 -- nslookup kubernetes.io: exit status 1 (1m0.241887127s)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:512: Pod busybox-7978565885-2cv69 could not resolve 'kubernetes.io': exit status 1
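Note the shape of this failure: the Server/Address header in the stdout comes from the pod's resolver configuration, so it does not by itself prove that 10.96.0.10 answered; the query then fails and the exec only returns after roughly the 1m0s timeout. A first hedged check is the pod's resolver config itself, reusing the harness's kubectl wrapper:

	out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-2cv69 -- cat /etc/resolv.conf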
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-tq8p2 -- nslookup kubernetes.io
E0602 17:38:51.789156  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 17:38:55.532102  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
multinode_test.go:510: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-tq8p2 -- nslookup kubernetes.io: exit status 1 (1m0.25912815s)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:512: Pod busybox-7978565885-tq8p2 could not resolve 'kubernetes.io': exit status 1
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-2cv69 -- nslookup kubernetes.default
E0602 17:39:33.161269  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 17:40:00.845203  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
multinode_test.go:520: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-2cv69 -- nslookup kubernetes.default: exit status 1 (1m0.243780362s)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:522: Pod busybox-7978565885-2cv69 could not resolve 'kubernetes.default': exit status 1
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-tq8p2 -- nslookup kubernetes.default
E0602 17:41:07.943698  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
multinode_test.go:520: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-tq8p2 -- nslookup kubernetes.default: exit status 1 (1m0.237075653s)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:522: Pod busybox-7978565885-tq8p2 could not resolve 'kubernetes.default': exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-2cv69 -- nslookup kubernetes.default.svc.cluster.local
E0602 17:41:35.630097  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-2cv69 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (1m0.246190894s)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:530: Pod busybox-7978565885-2cv69 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-tq8p2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-tq8p2 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (1m0.357514675s)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:530: Pod busybox-7978565885-tq8p2 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
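All six lookups fail the same way across both pods, which points at cluster DNS, or at pod-to-service networking between the two nodes, rather than at one flaky pod; some busybox image versions also ship an nslookup with known quirks against Kubernetes DNS, so the result is worth cross-checking against CoreDNS's own state. A minimal sketch of the next checks (hedged: k8s-app=kube-dns is the standard CoreDNS label, not something shown in this log):

	# Where are the CoreDNS pods scheduled, and are they Ready?
	out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- -n kube-system get pods -l k8s-app=kube-dns -o wide
	# Any resolution errors logged by CoreDNS itself?
	out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- -n kube-system logs -l k8s-app=kube-dns
	# Does the kube-dns Service actually have endpoints behind 10.96.0.10?
	out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- -n kube-system get endpoints kube-dns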
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220602173558-283122
helpers_test.go:235: (dbg) docker inspect multinode-20220602173558-283122:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "96c9bda474fd4a427ecb8f5b9f99b269e586546e76098bc8dfc3d3b9a5f8cd9a",
	        "Created": "2022-06-02T17:36:06.204587009Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 375463,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:36:06.557905139Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/96c9bda474fd4a427ecb8f5b9f99b269e586546e76098bc8dfc3d3b9a5f8cd9a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/96c9bda474fd4a427ecb8f5b9f99b269e586546e76098bc8dfc3d3b9a5f8cd9a/hostname",
	        "HostsPath": "/var/lib/docker/containers/96c9bda474fd4a427ecb8f5b9f99b269e586546e76098bc8dfc3d3b9a5f8cd9a/hosts",
	        "LogPath": "/var/lib/docker/containers/96c9bda474fd4a427ecb8f5b9f99b269e586546e76098bc8dfc3d3b9a5f8cd9a/96c9bda474fd4a427ecb8f5b9f99b269e586546e76098bc8dfc3d3b9a5f8cd9a-json.log",
	        "Name": "/multinode-20220602173558-283122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20220602173558-283122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-20220602173558-283122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7a6d0a94f3e52abf75f98ca4ad59837768bc78ac8576764715d5c8ae531a7346-init/diff:/var/lib/docker/overlay2/9517781e97ae29bbe1cf38381a601e806342524da1c5d5dc15ba54ec17f204b6/diff:/var/lib/docker/overlay2/28e14d8724bd82b9b6adde55b66e6d1f1be4fa6c3379f2f44ca1138dd648175a/diff:/var/lib/docker/overlay2/24589dfb9ddc941e7cb22b5e38dec69d1af0a29a330d344152ca7f26010354a8/diff:/var/lib/docker/overlay2/66c45d6273a3507b6e0b6b0753ec4fc82f07160fd115626e24a42bda27bbba86/diff:/var/lib/docker/overlay2/a6f0c661bd178a2a0fef034493a114067e8952b8516e69aff1e1e94a75918bbc/diff:/var/lib/docker/overlay2/5b40a045506372be84bfb01d05b068bc4cc926c5eb9e868226c4c386e8a331c7/diff:/var/lib/docker/overlay2/dbf9a9c6e59628984c95ea93ff4ada66c53b22ca3c2f910f70efba895cf40718/diff:/var/lib/docker/overlay2/5eea619393ee989a35a37a13b0633685af91729e1b74f6e73fd94b63c9d7d1fa/diff:/var/lib/docker/overlay2/24de6fbece5386c3658fa3e2c01723e6c1c24fb837d53b3d5a8c75b64ce2cd06/diff:/var/lib/docker/overlay2/04593e
f7d3bf72ee8b7439b0326af5b9818930e0dd48c35dba97e3b4ae39f4ee/diff:/var/lib/docker/overlay2/09310697cecb557085bb4d5dfc810a82fc533759ad98179263b2fc94153a64fa/diff:/var/lib/docker/overlay2/ee75c420f0615a19a5d44cfe042bdca5a2cf0f1c541168ba424140cf2aa7f98d/diff:/var/lib/docker/overlay2/fc8643a3e2060ab0365cd1227ab9ebbd5a573e6ebf8379bc6b64a756a1f66562/diff:/var/lib/docker/overlay2/0f8571cab87b35114fab495cbde9af8ac7e58be99412da82a79c90239e2227e1/diff:/var/lib/docker/overlay2/e1a5ae079e4f566081e5e786955257c23809c8719ea6b4593c8c91ac00380379/diff:/var/lib/docker/overlay2/373565e63dec12c85906e80b4a030f7d852992a756749be9424ac17802d589c0/diff:/var/lib/docker/overlay2/b886b42aa5e9f06b4381a6f58d55338774a2d02af2e91e3de156aa4bca4e4be0/diff:/var/lib/docker/overlay2/dee534a744ce54d08ca752b7f64e1772c063d4ba1e008c5a9773188bb009c377/diff:/var/lib/docker/overlay2/c8cf6667f29ffc5417675e4bd8eee6a91468de13292e3cb6ae2c70d3ad56d5fd/diff:/var/lib/docker/overlay2/072b57a4228c3928bdb27162809b102b485bb94c5f726c9a3e9892267ed30839/diff:/var/lib/d
ocker/overlay2/491d5b51bf6de1cfc4eef288c454f54e3b424e3b83fd9dc216c35891c7923f55/diff:/var/lib/docker/overlay2/78f167c2cc286e4058ef09d5a719437afddde854aac41b8d054567865fa61024/diff:/var/lib/docker/overlay2/10763b3ba26983ed9426ccca08bfceb7037d95c69827847e2ff755b4f2c5a4ab/diff:/var/lib/docker/overlay2/e559098efb21164d96bd0aee50fee9f48310df68887c5c884f4ba3f3fc895260/diff:/var/lib/docker/overlay2/555ef1505fe2b1ab4244f073e9212f269d70d8cd6ca0cd517d21cc80cd25b4c1/diff:/var/lib/docker/overlay2/0201da7c907a1f8f564cda48c3f3a403e484d901739d2584f1343a7907685c5e/diff:/var/lib/docker/overlay2/0abae5d84f8497a5114349421415431c43a34015ad98f6c9c3140143d2f1f383/diff:/var/lib/docker/overlay2/80c37982eef2dc00bd08f208551df377570134af84008156f6a500855fd2ba10/diff:/var/lib/docker/overlay2/cc38481bb11be97cd54c0032709337a491580c2f0ae6d3f7eec4c3616f8e3126/diff:/var/lib/docker/overlay2/dda978aeb4c7317bafbc077229c4417a4d8e7658d9604803c401448b471eab5e/diff:/var/lib/docker/overlay2/7c230ae0970689e2932cdaadc9e3f86ec8215fbf384c9eb59cfb958daf7
3197e/diff:/var/lib/docker/overlay2/145db51b169211e46c3bbab5ca998b2a019a4a813801acc2dae9b2dc09bc2ecc/diff:/var/lib/docker/overlay2/e604163672a013549c59bc042d1b932a3dad9e104d7079fbbb3ca24c29351c0d/diff:/var/lib/docker/overlay2/5a304d8680fd3fb594754173a70646923e2715ed21c98b8c43e1f1ef2d7abd6a/diff:/var/lib/docker/overlay2/b77f00351ddd70be2f3d9dd622b78e1a2937c4ba71585b357fef88d285eb762e/diff:/var/lib/docker/overlay2/196b6ae042b9b08748253c16f5e2b56d7f9a25077008d3d447cbc75447ed5e2a/diff:/var/lib/docker/overlay2/8c7c3fbb0bed8f7440d11104257806c70d16fd7d432311fc208d5012a439e5a1/diff:/var/lib/docker/overlay2/10604aa3b8ed5698c9fa8f55e00b9a9dc27d6b99135255d6c756fc080e615fc6/diff:/var/lib/docker/overlay2/c6602a17e9b0ab1d84297109204b48e60f293952fbc1feaf362895ec86860ead/diff:/var/lib/docker/overlay2/3828c5e50db9be2b4bc30fce040392ea735da5eaa819ad0848cb4fb79fc367da/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a6d0a94f3e52abf75f98ca4ad59837768bc78ac8576764715d5c8ae531a7346/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a6d0a94f3e52abf75f98ca4ad59837768bc78ac8576764715d5c8ae531a7346/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a6d0a94f3e52abf75f98ca4ad59837768bc78ac8576764715d5c8ae531a7346/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-20220602173558-283122",
	                "Source": "/var/lib/docker/volumes/multinode-20220602173558-283122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20220602173558-283122",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20220602173558-283122",
	                "name.minikube.sigs.k8s.io": "multinode-20220602173558-283122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bd2760c40398b26c7be4b20c43947c0aeb2d1a01a36b100eaad470b4a325cc7f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49517"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49516"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49513"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49515"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49514"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bd2760c40398",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-20220602173558-283122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "96c9bda474fd",
	                        "multinode-20220602173558-283122"
	                    ],
	                    "NetworkID": "e61542ad8e34e7ff09d21a0a5fccca185a192533dcb4a9237f61a24204efb552",
	                    "EndpointID": "9d7d50fd4febb23cbd280da5982ab665e982dee27604ea1e794d9acf3c76af9b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
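
Most of the inspect dump above is noise for this failure; when reproducing the post-mortem by hand, a Go-template --format keeps only the fields that matter. A sketch against the same container (the template paths mirror the JSON keys shown above; json is docker's built-in template function):

    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' multinode-20220602173558-283122
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}/{{.IPPrefixLen}} via {{.Gateway}}{{end}}' multinode-20220602173558-283122
    docker inspect -f '{{json .NetworkSettings.Ports}}' multinode-20220602173558-283122

This is the same mechanism the harness itself uses in the start log below, for example the --format={{.State.Status}} and NetworkSettings.Ports lookups.
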
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20220602173558-283122 -n multinode-20220602173558-283122
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220602173558-283122 logs -n 25: (1.235242979s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |               Profile               |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p                                                | custom-subnet-20220602173409-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:34 UTC | 02 Jun 22 17:34 UTC |
	|         | custom-subnet-20220602173409-283122               |                                     |         |                |                     |                     |
	| start   | -p first-20220602173437-283122                    | first-20220602173437-283122         | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:34 UTC | 02 Jun 22 17:35 UTC |
	|         | --driver=docker                                   |                                     |         |                |                     |                     |
	|         | --container-runtime=docker                        |                                     |         |                |                     |                     |
	| start   | -p                                                | second-20220602173437-283122        | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | second-20220602173437-283122                      |                                     |         |                |                     |                     |
	|         | --driver=docker                                   |                                     |         |                |                     |                     |
	|         | --container-runtime=docker                        |                                     |         |                |                     |                     |
	| profile | first-20220602173437-283122                       | minikube                            | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	| profile | list -ojson                                       | first-20220602173437-283122         | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	| profile | second-20220602173437-283122                      | first-20220602173437-283122         | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	| profile | list -ojson                                       | second-20220602173437-283122        | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	| delete  | -p                                                | second-20220602173437-283122        | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | second-20220602173437-283122                      |                                     |         |                |                     |                     |
	| delete  | -p first-20220602173437-283122                    | first-20220602173437-283122         | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	| start   | -p                                                | mount-start-1-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | mount-start-1-20220602173533-283122               |                                     |         |                |                     |                     |
	|         | --memory=2048 --mount                             |                                     |         |                |                     |                     |
	|         | --mount-gid 0 --mount-msize 6543                  |                                     |         |                |                     |                     |
	|         | --mount-port 46464 --mount-uid 0                  |                                     |         |                |                     |                     |
	|         | --no-kubernetes --driver=docker                   |                                     |         |                |                     |                     |
	|         | --container-runtime=docker                        |                                     |         |                |                     |                     |
	| ssh     | mount-start-1-20220602173533-283122               | mount-start-1-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | ssh -- ls /minikube-host                          |                                     |         |                |                     |                     |
	| start   | -p                                                | mount-start-2-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | mount-start-2-20220602173533-283122               |                                     |         |                |                     |                     |
	|         | --memory=2048 --mount                             |                                     |         |                |                     |                     |
	|         | --mount-gid 0 --mount-msize 6543                  |                                     |         |                |                     |                     |
	|         | --mount-port 46465 --mount-uid 0                  |                                     |         |                |                     |                     |
	|         | --no-kubernetes --driver=docker                   |                                     |         |                |                     |                     |
	|         | --container-runtime=docker                        |                                     |         |                |                     |                     |
	| ssh     | mount-start-2-20220602173533-283122               | mount-start-2-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | ssh -- ls /minikube-host                          |                                     |         |                |                     |                     |
	| delete  | -p                                                | mount-start-1-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | mount-start-1-20220602173533-283122               |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=5                            |                                     |         |                |                     |                     |
	| ssh     | mount-start-2-20220602173533-283122               | mount-start-2-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | ssh -- ls /minikube-host                          |                                     |         |                |                     |                     |
	| stop    | -p                                                | mount-start-2-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | mount-start-2-20220602173533-283122               |                                     |         |                |                     |                     |
	| start   | -p                                                | mount-start-2-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | mount-start-2-20220602173533-283122               |                                     |         |                |                     |                     |
	| ssh     | mount-start-2-20220602173533-283122               | mount-start-2-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | ssh -- ls /minikube-host                          |                                     |         |                |                     |                     |
	| delete  | -p                                                | mount-start-2-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | mount-start-2-20220602173533-283122               |                                     |         |                |                     |                     |
	| delete  | -p                                                | mount-start-1-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | mount-start-1-20220602173533-283122               |                                     |         |                |                     |                     |
	| start   | -p                                                | multinode-20220602173558-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:37 UTC |
	|         | multinode-20220602173558-283122                   |                                     |         |                |                     |                     |
	|         | --wait=true --memory=2200                         |                                     |         |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                                     |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                     |         |                |                     |                     |
	|         | --driver=docker                                   |                                     |         |                |                     |                     |
	|         | --container-runtime=docker                        |                                     |         |                |                     |                     |
	| kubectl | -p multinode-20220602173558-283122 -- apply -f    | multinode-20220602173558-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 UTC | 02 Jun 22 17:37 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                                     |         |                |                     |                     |
	| kubectl | -p                                                | multinode-20220602173558-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 UTC | 02 Jun 22 17:37 UTC |
	|         | multinode-20220602173558-283122                   |                                     |         |                |                     |                     |
	|         | -- rollout status                                 |                                     |         |                |                     |                     |
	|         | deployment/busybox                                |                                     |         |                |                     |                     |
	| kubectl | -p multinode-20220602173558-283122                | multinode-20220602173558-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 UTC | 02 Jun 22 17:37 UTC |
	|         | -- get pods -o                                    |                                     |         |                |                     |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                                     |         |                |                     |                     |
	| kubectl | -p multinode-20220602173558-283122                | multinode-20220602173558-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 UTC | 02 Jun 22 17:37 UTC |
	|         | -- get pods -o                                    |                                     |         |                |                     |                     |
	|         | jsonpath='{.items[*].metadata.name}'              |                                     |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 17:35:58
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 17:35:58.233087  374798 out.go:296] Setting OutFile to fd 1 ...
	I0602 17:35:58.233233  374798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:35:58.233244  374798 out.go:309] Setting ErrFile to fd 2...
	I0602 17:35:58.233248  374798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:35:58.233368  374798 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 17:35:58.233666  374798 out.go:303] Setting JSON to false
	I0602 17:35:58.235411  374798 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8312,"bootTime":1654183047,"procs":1232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0602 17:35:58.235494  374798 start.go:125] virtualization: kvm guest
	I0602 17:35:58.238406  374798 out.go:177] * [multinode-20220602173558-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0602 17:35:58.240054  374798 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 17:35:58.239963  374798 notify.go:193] Checking for updates...
	I0602 17:35:58.241681  374798 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 17:35:58.243471  374798 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:35:58.244929  374798 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 17:35:58.246343  374798 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0602 17:35:58.248083  374798 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 17:35:58.289688  374798 docker.go:137] docker version: linux-20.10.16
	I0602 17:35:58.289836  374798 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:35:58.394930  374798 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:34 SystemTime:2022-06-02 17:35:58.319122951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:35:58.395049  374798 docker.go:254] overlay module found
	I0602 17:35:58.397296  374798 out.go:177] * Using the docker driver based on user configuration
	I0602 17:35:58.398781  374798 start.go:284] selected driver: docker
	I0602 17:35:58.398802  374798 start.go:806] validating driver "docker" against <nil>
	I0602 17:35:58.398826  374798 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 17:35:58.399679  374798 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:35:58.502623  374798 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:34 SystemTime:2022-06-02 17:35:58.427707256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:35:58.502766  374798 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 17:35:58.502961  374798 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 17:35:58.505354  374798 out.go:177] * Using Docker driver with the root privilege
	I0602 17:35:58.506734  374798 cni.go:95] Creating CNI manager for ""
	I0602 17:35:58.506759  374798 cni.go:156] 0 nodes found, recommending kindnet
	I0602 17:35:58.506797  374798 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0602 17:35:58.506810  374798 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0602 17:35:58.506815  374798 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0602 17:35:58.506842  374798 start_flags.go:306] config:
	{Name:multinode-20220602173558-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220602173558-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:35:58.508595  374798 out.go:177] * Starting control plane node multinode-20220602173558-283122 in cluster multinode-20220602173558-283122
	I0602 17:35:58.509982  374798 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 17:35:58.511418  374798 out.go:177] * Pulling base image ...
	I0602 17:35:58.512742  374798 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:35:58.512796  374798 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 17:35:58.512813  374798 cache.go:57] Caching tarball of preloaded images
	I0602 17:35:58.512833  374798 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 17:35:58.513068  374798 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 17:35:58.513087  374798 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 17:35:58.513431  374798 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/config.json ...
	I0602 17:35:58.513462  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/config.json: {Name:mkcac6351aa666483dc218cf03023a9ea6d2bae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:35:58.561417  374798 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 17:35:58.561457  374798 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 17:35:58.561475  374798 cache.go:206] Successfully downloaded all kic artifacts
	I0602 17:35:58.561527  374798 start.go:352] acquiring machines lock for multinode-20220602173558-283122: {Name:mkd1d7ce0a0491c5601a577f4da4ed2fb2774cda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 17:35:58.561685  374798 start.go:356] acquired machines lock for "multinode-20220602173558-283122" in 133.214µs
	I0602 17:35:58.561726  374798 start.go:91] Provisioning new machine with config: &{Name:multinode-20220602173558-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220602173558-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 17:35:58.561814  374798 start.go:131] createHost starting for "" (driver="docker")
	I0602 17:35:58.565214  374798 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0602 17:35:58.565465  374798 start.go:165] libmachine.API.Create for "multinode-20220602173558-283122" (driver="docker")
	I0602 17:35:58.565501  374798 client.go:168] LocalClient.Create starting
	I0602 17:35:58.565568  374798 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem
	I0602 17:35:58.565599  374798 main.go:134] libmachine: Decoding PEM data...
	I0602 17:35:58.565618  374798 main.go:134] libmachine: Parsing certificate...
	I0602 17:35:58.565673  374798 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem
	I0602 17:35:58.565695  374798 main.go:134] libmachine: Decoding PEM data...
	I0602 17:35:58.565707  374798 main.go:134] libmachine: Parsing certificate...
	I0602 17:35:58.566006  374798 cli_runner.go:164] Run: docker network inspect multinode-20220602173558-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 17:35:58.598357  374798 cli_runner.go:211] docker network inspect multinode-20220602173558-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 17:35:58.598454  374798 network_create.go:272] running [docker network inspect multinode-20220602173558-283122] to gather additional debugging logs...
	I0602 17:35:58.598484  374798 cli_runner.go:164] Run: docker network inspect multinode-20220602173558-283122
	W0602 17:35:58.628937  374798 cli_runner.go:211] docker network inspect multinode-20220602173558-283122 returned with exit code 1
	I0602 17:35:58.628974  374798 network_create.go:275] error running [docker network inspect multinode-20220602173558-283122]: docker network inspect multinode-20220602173558-283122: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220602173558-283122
	I0602 17:35:58.628992  374798 network_create.go:277] output of [docker network inspect multinode-20220602173558-283122]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220602173558-283122
	
	** /stderr **
	I0602 17:35:58.629069  374798 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 17:35:58.661197  374798 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000484100] misses:0}
	I0602 17:35:58.661260  374798 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 17:35:58.661280  374798 network_create.go:115] attempt to create docker network multinode-20220602173558-283122 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 17:35:58.661328  374798 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220602173558-283122
	I0602 17:35:58.728468  374798 network_create.go:99] docker network multinode-20220602173558-283122 192.168.49.0/24 created
	I0602 17:35:58.728506  374798 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20220602173558-283122" container
	I0602 17:35:58.728580  374798 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 17:35:58.759443  374798 cli_runner.go:164] Run: docker volume create multinode-20220602173558-283122 --label name.minikube.sigs.k8s.io=multinode-20220602173558-283122 --label created_by.minikube.sigs.k8s.io=true
	I0602 17:35:58.793059  374798 oci.go:103] Successfully created a docker volume multinode-20220602173558-283122
	I0602 17:35:58.793149  374798 cli_runner.go:164] Run: docker run --rm --name multinode-20220602173558-283122-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220602173558-283122 --entrypoint /usr/bin/test -v multinode-20220602173558-283122:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 17:35:59.370089  374798 oci.go:107] Successfully prepared a docker volume multinode-20220602173558-283122
	I0602 17:35:59.370233  374798 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:35:59.370260  374798 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 17:35:59.370328  374798 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220602173558-283122:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 17:36:06.069247  374798 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220602173558-283122:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (6.698834574s)
	I0602 17:36:06.069294  374798 kic.go:188] duration metric: took 6.699023 seconds to extract preloaded images to volume
	W0602 17:36:06.069440  374798 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0602 17:36:06.069559  374798 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 17:36:06.174267  374798 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20220602173558-283122 --name multinode-20220602173558-283122 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220602173558-283122 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20220602173558-283122 --network multinode-20220602173558-283122 --ip 192.168.49.2 --volume multinode-20220602173558-283122:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	I0602 17:36:06.567220  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Running}}
	I0602 17:36:06.602477  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Status}}
	I0602 17:36:06.634965  374798 cli_runner.go:164] Run: docker exec multinode-20220602173558-283122 stat /var/lib/dpkg/alternatives/iptables
	I0602 17:36:06.696098  374798 oci.go:247] the created container "multinode-20220602173558-283122" has a running status.
	I0602 17:36:06.696130  374798 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa...
	I0602 17:36:06.762806  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0602 17:36:06.762854  374798 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0602 17:36:06.850067  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Status}}
	I0602 17:36:06.883321  374798 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0602 17:36:06.883351  374798 kic_runner.go:114] Args: [docker exec --privileged multinode-20220602173558-283122 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0602 17:36:06.976450  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Status}}
	I0602 17:36:07.010291  374798 machine.go:88] provisioning docker machine ...
	I0602 17:36:07.010342  374798 ubuntu.go:169] provisioning hostname "multinode-20220602173558-283122"
	I0602 17:36:07.010419  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:07.045968  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:07.046178  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49517 <nil> <nil>}
	I0602 17:36:07.046202  374798 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220602173558-283122 && echo "multinode-20220602173558-283122" | sudo tee /etc/hostname
	I0602 17:36:07.169951  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220602173558-283122
	
	I0602 17:36:07.170059  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:07.203134  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:07.203290  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49517 <nil> <nil>}
	I0602 17:36:07.203312  374798 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220602173558-283122' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220602173558-283122/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220602173558-283122' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 17:36:07.316909  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 17:36:07.316944  374798 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 17:36:07.316975  374798 ubuntu.go:177] setting up certificates
	I0602 17:36:07.316989  374798 provision.go:83] configureAuth start
	I0602 17:36:07.317076  374798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602173558-283122
	I0602 17:36:07.348231  374798 provision.go:138] copyHostCerts
	I0602 17:36:07.348271  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 17:36:07.348326  374798 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 17:36:07.348344  374798 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 17:36:07.348404  374798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 17:36:07.348481  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 17:36:07.348502  374798 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 17:36:07.348513  374798 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 17:36:07.348540  374798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 17:36:07.348591  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 17:36:07.348605  374798 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 17:36:07.348609  374798 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 17:36:07.348631  374798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1679 bytes)
	I0602 17:36:07.348682  374798 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.multinode-20220602173558-283122 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220602173558-283122]
	I0602 17:36:07.515688  374798 provision.go:172] copyRemoteCerts
	I0602 17:36:07.515763  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 17:36:07.515799  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:07.547011  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:36:07.636708  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0602 17:36:07.636785  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 17:36:07.654835  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0602 17:36:07.654898  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0602 17:36:07.672470  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0602 17:36:07.672542  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 17:36:07.690031  374798 provision.go:86] duration metric: configureAuth took 373.018186ms
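
configureAuth generates a server certificate whose subject alternative names cover every address the Docker daemon may be reached at (the san=[...] list in the log above). A sketch of issuing such a cert with Go's crypto/x509, assuming caCert/caKey were already loaded from ca.pem and ca-key.pem; the field values mirror the log, but the function is illustrative, not minikube's implementation:

	package certsketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// signServerCert issues a server certificate covering the SANs from the log:
	// the node IP, loopback, and the "localhost"/"minikube"/<node> hostnames.
	func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, host string) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins." + host}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", host},
			IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
	}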
	I0602 17:36:07.690062  374798 ubuntu.go:193] setting minikube options for container-runtime
	I0602 17:36:07.690288  374798 config.go:178] Loaded profile config "multinode-20220602173558-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:36:07.690352  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:07.722440  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:07.722619  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49517 <nil> <nil>}
	I0602 17:36:07.722640  374798 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 17:36:07.841371  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 17:36:07.841406  374798 ubuntu.go:71] root file system type: overlay
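
The overlay answer matters because it tells the provisioner the machine is a kic container on an overlayfs root rather than a VM. A sketch of the same probe, assuming GNU df (the --output flag is coreutils-specific):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// `df --output=fstype /` prints a "Type" header followed by the
		// filesystem type, so the last whitespace-separated field is the answer.
		out, err := exec.Command("df", "--output=fstype", "/").Output()
		if err != nil {
			log.Fatal(err)
		}
		fields := strings.Fields(string(out))
		fmt.Println("root fs type:", fields[len(fields)-1]) // "overlay" here
	}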
	I0602 17:36:07.841587  374798 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 17:36:07.841662  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:07.872720  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:07.872875  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49517 <nil> <nil>}
	I0602 17:36:07.872935  374798 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 17:36:07.997841  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 17:36:07.997928  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:08.029422  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:08.029581  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49517 <nil> <nil>}
	I0602 17:36:08.029601  374798 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 17:36:08.671832  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:36:07.991668796 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0602 17:36:08.671863  374798 machine.go:91] provisioned docker machine in 1.661540122s
	I0602 17:36:08.671875  374798 client.go:171] LocalClient.Create took 10.106368546s
	I0602 17:36:08.671895  374798 start.go:173] duration metric: libmachine.API.Create for "multinode-20220602173558-283122" took 10.106423566s
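
Note how the unit swap above stays idempotent: the candidate is written to docker.service.new, and only when `diff -u` exits non-zero (the files differ) does the `||` branch move it into place and restart Docker. A sketch of that guard as a helper; the paths and unit name are the ones from the log, the function itself is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// swapIfChanged installs src over dst and restarts the unit only when the
	// two files differ: diff exits 0 on identical input, short-circuiting the ||.
	func swapIfChanged(src, dst, unit string) error {
		script := fmt.Sprintf(
			"sudo diff -u %s %s || { sudo mv %s %s; sudo systemctl daemon-reload && sudo systemctl restart %s; }",
			dst, src, src, dst, unit)
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		fmt.Print(string(out)) // the unified diff, when there is one
		return err
	}

	func main() {
		if err := swapIfChanged("/lib/systemd/system/docker.service.new",
			"/lib/systemd/system/docker.service", "docker"); err != nil {
			fmt.Println("update failed:", err)
		}
	}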
	I0602 17:36:08.671903  374798 start.go:306] post-start starting for "multinode-20220602173558-283122" (driver="docker")
	I0602 17:36:08.671909  374798 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 17:36:08.671976  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 17:36:08.672066  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:08.703241  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:36:08.788744  374798 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 17:36:08.791529  374798 command_runner.go:130] > NAME="Ubuntu"
	I0602 17:36:08.791554  374798 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0602 17:36:08.791561  374798 command_runner.go:130] > ID=ubuntu
	I0602 17:36:08.791569  374798 command_runner.go:130] > ID_LIKE=debian
	I0602 17:36:08.791576  374798 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0602 17:36:08.791583  374798 command_runner.go:130] > VERSION_ID="20.04"
	I0602 17:36:08.791592  374798 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0602 17:36:08.791600  374798 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0602 17:36:08.791607  374798 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0602 17:36:08.791617  374798 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0602 17:36:08.791621  374798 command_runner.go:130] > VERSION_CODENAME=focal
	I0602 17:36:08.791627  374798 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0602 17:36:08.791702  374798 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 17:36:08.791719  374798 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 17:36:08.791733  374798 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 17:36:08.791740  374798 info.go:137] Remote host: Ubuntu 20.04.4 LTS
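
The three "Couldn't set key" lines are benign: the os-release parser maps known KEY=VALUE pairs onto struct fields and warns about the rest. A sketch of that parsing into a plain map, assuming the quoting conventions shown above:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/etc/os-release")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()
		info := map[string]string{}
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			k, v, ok := strings.Cut(sc.Text(), "=")
			if !ok {
				continue // skip blank or malformed lines
			}
			info[k] = strings.Trim(v, `"`) // values may or may not be quoted
		}
		fmt.Printf("%s %s\n", info["NAME"], info["VERSION"]) // e.g. Ubuntu 20.04.4 LTS (Focal Fossa)
	}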
	I0602 17:36:08.791753  374798 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 17:36:08.791819  374798 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 17:36:08.791883  374798 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem -> 2831222.pem in /etc/ssl/certs
	I0602 17:36:08.791898  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem -> /etc/ssl/certs/2831222.pem
	I0602 17:36:08.791968  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 17:36:08.799118  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem --> /etc/ssl/certs/2831222.pem (1708 bytes)
	I0602 17:36:08.816845  374798 start.go:309] post-start completed in 144.926263ms
	I0602 17:36:08.817254  374798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602173558-283122
	I0602 17:36:08.848069  374798 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/config.json ...
	I0602 17:36:08.848322  374798 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 17:36:08.848366  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:08.879030  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:36:08.961953  374798 command_runner.go:130] > 22%
	I0602 17:36:08.962040  374798 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 17:36:08.965860  374798 command_runner.go:130] > 229G
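
These two probes read the used-percent and free space of /var so minikube can warn about low disk. A sketch of the same awk pipelines driven from Go; NR==2 selects the data row under df's header line:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func probe(script string) string {
		out, err := exec.Command("sh", "-c", script).Output()
		if err != nil {
			log.Fatal(err)
		}
		return strings.TrimSpace(string(out))
	}

	func main() {
		fmt.Println("used:", probe(`df -h /var | awk 'NR==2{print $5}'`))  // e.g. 22%
		fmt.Println("free:", probe(`df -BG /var | awk 'NR==2{print $4}'`)) // e.g. 229G
	}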
	I0602 17:36:08.966032  374798 start.go:134] duration metric: createHost completed in 10.404205446s
	I0602 17:36:08.966053  374798 start.go:81] releasing machines lock for "multinode-20220602173558-283122", held for 10.40435218s
	I0602 17:36:08.966138  374798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602173558-283122
	I0602 17:36:08.998274  374798 ssh_runner.go:195] Run: systemctl --version
	I0602 17:36:08.998341  374798 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 17:36:08.998397  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:08.998347  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:09.033185  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:36:09.034883  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:36:09.117353  374798 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.17)
	I0602 17:36:09.117400  374798 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0602 17:36:09.117496  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 17:36:09.135989  374798 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0602 17:36:09.136018  374798 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0602 17:36:09.136024  374798 command_runner.go:130] > <H1>302 Moved</H1>
	I0602 17:36:09.136028  374798 command_runner.go:130] > The document has moved
	I0602 17:36:09.136033  374798 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0602 17:36:09.136038  374798 command_runner.go:130] > </BODY></HTML>
	I0602 17:36:09.136166  374798 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 17:36:09.144977  374798 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0602 17:36:09.145177  374798 command_runner.go:130] > [Unit]
	I0602 17:36:09.145207  374798 command_runner.go:130] > Description=Docker Application Container Engine
	I0602 17:36:09.145221  374798 command_runner.go:130] > Documentation=https://docs.docker.com
	I0602 17:36:09.145228  374798 command_runner.go:130] > BindsTo=containerd.service
	I0602 17:36:09.145237  374798 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0602 17:36:09.145243  374798 command_runner.go:130] > Wants=network-online.target
	I0602 17:36:09.145269  374798 command_runner.go:130] > Requires=docker.socket
	I0602 17:36:09.145280  374798 command_runner.go:130] > StartLimitBurst=3
	I0602 17:36:09.145286  374798 command_runner.go:130] > StartLimitIntervalSec=60
	I0602 17:36:09.145295  374798 command_runner.go:130] > [Service]
	I0602 17:36:09.145300  374798 command_runner.go:130] > Type=notify
	I0602 17:36:09.145306  374798 command_runner.go:130] > Restart=on-failure
	I0602 17:36:09.145321  374798 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0602 17:36:09.145339  374798 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0602 17:36:09.145354  374798 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0602 17:36:09.145371  374798 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0602 17:36:09.145381  374798 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0602 17:36:09.145388  374798 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0602 17:36:09.145395  374798 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0602 17:36:09.145403  374798 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0602 17:36:09.145414  374798 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0602 17:36:09.145422  374798 command_runner.go:130] > ExecStart=
	I0602 17:36:09.145436  374798 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0602 17:36:09.145446  374798 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0602 17:36:09.145457  374798 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0602 17:36:09.145467  374798 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0602 17:36:09.145472  374798 command_runner.go:130] > LimitNOFILE=infinity
	I0602 17:36:09.145477  374798 command_runner.go:130] > LimitNPROC=infinity
	I0602 17:36:09.145481  374798 command_runner.go:130] > LimitCORE=infinity
	I0602 17:36:09.145490  374798 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0602 17:36:09.145495  374798 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0602 17:36:09.145498  374798 command_runner.go:130] > TasksMax=infinity
	I0602 17:36:09.145502  374798 command_runner.go:130] > TimeoutStartSec=0
	I0602 17:36:09.145508  374798 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0602 17:36:09.145515  374798 command_runner.go:130] > Delegate=yes
	I0602 17:36:09.145522  374798 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0602 17:36:09.145529  374798 command_runner.go:130] > KillMode=process
	I0602 17:36:09.145534  374798 command_runner.go:130] > [Install]
	I0602 17:36:09.145541  374798 command_runner.go:130] > WantedBy=multi-user.target
	I0602 17:36:09.145865  374798 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 17:36:09.145922  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 17:36:09.155138  374798 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 17:36:09.167940  374798 command_runner.go:130] > runtime-endpoint: unix:///var/run/dockershim.sock
	I0602 17:36:09.167968  374798 command_runner.go:130] > image-endpoint: unix:///var/run/dockershim.sock
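
crictl.yaml points crictl at the dockershim socket, since this cluster runs the docker runtime. A sketch of writing that file directly (the log does it remotely via printf | sudo tee):

	package main

	import (
		"log"
		"os"
	)

	const crictlYAML = `runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	`

	func main() {
		if err := os.MkdirAll("/etc", 0o755); err != nil {
			log.Fatal(err)
		}
		// 0644 matches what `sudo tee` leaves behind for a fresh file.
		if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlYAML), 0o644); err != nil {
			log.Fatal(err)
		}
	}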
	I0602 17:36:09.168031  374798 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 17:36:09.244950  374798 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 17:36:09.320702  374798 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 17:36:09.329629  374798 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0602 17:36:09.329656  374798 command_runner.go:130] > [Unit]
	I0602 17:36:09.329672  374798 command_runner.go:130] > Description=Docker Application Container Engine
	I0602 17:36:09.329692  374798 command_runner.go:130] > Documentation=https://docs.docker.com
	I0602 17:36:09.329701  374798 command_runner.go:130] > BindsTo=containerd.service
	I0602 17:36:09.329711  374798 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0602 17:36:09.329722  374798 command_runner.go:130] > Wants=network-online.target
	I0602 17:36:09.329734  374798 command_runner.go:130] > Requires=docker.socket
	I0602 17:36:09.329748  374798 command_runner.go:130] > StartLimitBurst=3
	I0602 17:36:09.329760  374798 command_runner.go:130] > StartLimitIntervalSec=60
	I0602 17:36:09.329766  374798 command_runner.go:130] > [Service]
	I0602 17:36:09.329776  374798 command_runner.go:130] > Type=notify
	I0602 17:36:09.329783  374798 command_runner.go:130] > Restart=on-failure
	I0602 17:36:09.329801  374798 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0602 17:36:09.329817  374798 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0602 17:36:09.329831  374798 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0602 17:36:09.329846  374798 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0602 17:36:09.329860  374798 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0602 17:36:09.329875  374798 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0602 17:36:09.329891  374798 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0602 17:36:09.329907  374798 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0602 17:36:09.329917  374798 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0602 17:36:09.329927  374798 command_runner.go:130] > ExecStart=
	I0602 17:36:09.329949  374798 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0602 17:36:09.329962  374798 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0602 17:36:09.329977  374798 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0602 17:36:09.329991  374798 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0602 17:36:09.330001  374798 command_runner.go:130] > LimitNOFILE=infinity
	I0602 17:36:09.330006  374798 command_runner.go:130] > LimitNPROC=infinity
	I0602 17:36:09.330013  374798 command_runner.go:130] > LimitCORE=infinity
	I0602 17:36:09.330028  374798 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0602 17:36:09.330051  374798 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0602 17:36:09.330062  374798 command_runner.go:130] > TasksMax=infinity
	I0602 17:36:09.330071  374798 command_runner.go:130] > TimeoutStartSec=0
	I0602 17:36:09.330078  374798 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0602 17:36:09.330086  374798 command_runner.go:130] > Delegate=yes
	I0602 17:36:09.330091  374798 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0602 17:36:09.330098  374798 command_runner.go:130] > KillMode=process
	I0602 17:36:09.330102  374798 command_runner.go:130] > [Install]
	I0602 17:36:09.330109  374798 command_runner.go:130] > WantedBy=multi-user.target
	I0602 17:36:09.330447  374798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 17:36:09.408708  374798 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 17:36:09.418325  374798 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 17:36:09.454791  374798 command_runner.go:130] > 20.10.16
	I0602 17:36:09.457132  374798 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 17:36:09.494008  374798 command_runner.go:130] > 20.10.16
	I0602 17:36:09.499102  374798 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 17:36:09.499192  374798 cli_runner.go:164] Run: docker network inspect multinode-20220602173558-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 17:36:09.530539  374798 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0602 17:36:09.533912  374798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
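
The grep/echo/cp pipeline above replaces the host.minikube.internal line in /etc/hosts without duplicating it; the same pattern recurs below for control-plane.minikube.internal. A sketch of that replace-then-append logic in Go; the real pipeline stages through /tmp and `sudo cp` because /etc/hosts is root-owned:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// setHostsEntry drops any stale line ending in "\t<name>" and appends a fresh
	// "ip\tname" mapping, mirroring the grep -v / echo / cp pipeline in the log.
	func setHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var keep []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				keep = append(keep, line)
			}
		}
		keep = append(keep, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := setHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}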
	I0602 17:36:09.545513  374798 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0602 17:36:09.546970  374798 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:36:09.547036  374798 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 17:36:09.577182  374798 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.23.6
	I0602 17:36:09.577218  374798 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.23.6
	I0602 17:36:09.577227  374798 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.23.6
	I0602 17:36:09.577235  374798 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.23.6
	I0602 17:36:09.577249  374798 command_runner.go:130] > k8s.gcr.io/etcd:3.5.1-0
	I0602 17:36:09.577255  374798 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0602 17:36:09.577260  374798 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0602 17:36:09.577268  374798 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 17:36:09.579434  374798 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 17:36:09.579455  374798 docker.go:541] Images already preloaded, skipping extraction
	I0602 17:36:09.579504  374798 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 17:36:09.609223  374798 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.23.6
	I0602 17:36:09.609252  374798 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.23.6
	I0602 17:36:09.609260  374798 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.23.6
	I0602 17:36:09.609267  374798 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.23.6
	I0602 17:36:09.609274  374798 command_runner.go:130] > k8s.gcr.io/etcd:3.5.1-0
	I0602 17:36:09.609282  374798 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0602 17:36:09.609290  374798 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0602 17:36:09.609301  374798 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 17:36:09.611486  374798 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 17:36:09.611510  374798 cache_images.go:84] Images are preloaded, skipping loading
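
The image listing is taken twice and checked against the set baked into the preload tarball; since `docker images` ordering varies between invocations (compare the two listings above), the comparison has to be order-insensitive. A sketch of that check:

	package main

	import "fmt"

	// hasAll reports whether every expected image appears in the listing,
	// comparing as a set because listing order is not stable.
	func hasAll(have, want []string) bool {
		set := make(map[string]bool, len(have))
		for _, img := range have {
			set[img] = true
		}
		for _, img := range want {
			if !set[img] {
				return false
			}
		}
		return true
	}

	func main() {
		have := []string{"k8s.gcr.io/pause:3.6", "k8s.gcr.io/etcd:3.5.1-0"}
		fmt.Println(hasAll(have, []string{"k8s.gcr.io/pause:3.6"})) // true
	}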
	I0602 17:36:09.611578  374798 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 17:36:09.692543  374798 command_runner.go:130] > cgroupfs
	I0602 17:36:09.692650  374798 cni.go:95] Creating CNI manager for ""
	I0602 17:36:09.692665  374798 cni.go:156] 1 nodes found, recommending kindnet
	I0602 17:36:09.692683  374798 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 17:36:09.692701  374798 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220602173558-283122 NodeName:multinode-20220602173558-283122 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 17:36:09.692834  374798 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "multinode-20220602173558-283122"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 17:36:09.692918  374798 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=multinode-20220602173558-283122 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:multinode-20220602173558-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 17:36:09.692969  374798 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 17:36:09.699586  374798 command_runner.go:130] > kubeadm
	I0602 17:36:09.699618  374798 command_runner.go:130] > kubectl
	I0602 17:36:09.699624  374798 command_runner.go:130] > kubelet
	I0602 17:36:09.700185  374798 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 17:36:09.700243  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 17:36:09.707281  374798 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (409 bytes)
	I0602 17:36:09.720168  374798 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 17:36:09.733002  374798 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2053 bytes)
	I0602 17:36:09.746161  374798 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0602 17:36:09.749231  374798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 17:36:09.759164  374798 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122 for IP: 192.168.49.2
	I0602 17:36:09.759279  374798 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 17:36:09.759313  374798 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 17:36:09.759361  374798 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.key
	I0602 17:36:09.759385  374798 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.crt with IP's: []
	I0602 17:36:10.033743  374798 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.crt ...
	I0602 17:36:10.033783  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.crt: {Name:mk576b2d7ed9c2c793890ead1e9c37d12768cad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:10.033999  374798 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.key ...
	I0602 17:36:10.034012  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.key: {Name:mkdfbc7402d3976d73540149469e1b639252abb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:10.034099  374798 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.key.dd3b5fb2
	I0602 17:36:10.034115  374798 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0602 17:36:10.482422  374798 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.crt.dd3b5fb2 ...
	I0602 17:36:10.482468  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.crt.dd3b5fb2: {Name:mk187e82cb77f4156767f1c3963ab9622ab60a4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:10.482683  374798 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.key.dd3b5fb2 ...
	I0602 17:36:10.482698  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.key.dd3b5fb2: {Name:mk9043639c68888e46623f7542ad65b8c2cc6cb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:10.482790  374798 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.crt
	I0602 17:36:10.482848  374798 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.key
	I0602 17:36:10.482885  374798 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.key
	I0602 17:36:10.482899  374798 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.crt with IP's: []
	I0602 17:36:10.687605  374798 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.crt ...
	I0602 17:36:10.687642  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.crt: {Name:mk065ba65d3690fadd68a80e0a5ee1cc58e053c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:10.687873  374798 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.key ...
	I0602 17:36:10.687887  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.key: {Name:mkeb52e75532542ce9664b941ff32197fdada6fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:10.687998  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0602 17:36:10.688018  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0602 17:36:10.688027  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0602 17:36:10.688037  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0602 17:36:10.688052  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0602 17:36:10.688065  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0602 17:36:10.688079  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0602 17:36:10.688090  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0602 17:36:10.688141  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem (1338 bytes)
	W0602 17:36:10.688180  374798 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122_empty.pem, impossibly tiny 0 bytes
	I0602 17:36:10.688580  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 17:36:10.688644  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 17:36:10.688675  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 17:36:10.688703  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1679 bytes)
	I0602 17:36:10.688769  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem (1708 bytes)
	I0602 17:36:10.688811  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem -> /usr/share/ca-certificates/2831222.pem
	I0602 17:36:10.688829  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:10.688840  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem -> /usr/share/ca-certificates/283122.pem
	I0602 17:36:10.690082  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 17:36:10.708457  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0602 17:36:10.725998  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 17:36:10.744712  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 17:36:10.763188  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 17:36:10.781831  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0602 17:36:10.799479  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 17:36:10.817089  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0602 17:36:10.835410  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem --> /usr/share/ca-certificates/2831222.pem (1708 bytes)
	I0602 17:36:10.853247  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 17:36:10.870341  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem --> /usr/share/ca-certificates/283122.pem (1338 bytes)
	I0602 17:36:10.887232  374798 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 17:36:10.899749  374798 ssh_runner.go:195] Run: openssl version
	I0602 17:36:10.904285  374798 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0602 17:36:10.904480  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 17:36:10.911777  374798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:10.915034  374798 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:10.915078  374798 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:10.915128  374798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:10.920005  374798 command_runner.go:130] > b5213941
	I0602 17:36:10.920167  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 17:36:10.927467  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/283122.pem && ln -fs /usr/share/ca-certificates/283122.pem /etc/ssl/certs/283122.pem"
	I0602 17:36:10.934485  374798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/283122.pem
	I0602 17:36:10.937355  374798 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  2 17:19 /usr/share/ca-certificates/283122.pem
	I0602 17:36:10.937491  374798 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:19 /usr/share/ca-certificates/283122.pem
	I0602 17:36:10.937547  374798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/283122.pem
	I0602 17:36:10.942142  374798 command_runner.go:130] > 51391683
	I0602 17:36:10.942313  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/283122.pem /etc/ssl/certs/51391683.0"
	I0602 17:36:10.950061  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2831222.pem && ln -fs /usr/share/ca-certificates/2831222.pem /etc/ssl/certs/2831222.pem"
	I0602 17:36:10.958171  374798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2831222.pem
	I0602 17:36:10.961323  374798 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  2 17:19 /usr/share/ca-certificates/2831222.pem
	I0602 17:36:10.961410  374798 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:19 /usr/share/ca-certificates/2831222.pem
	I0602 17:36:10.961462  374798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2831222.pem
	I0602 17:36:10.966087  374798 command_runner.go:130] > 3ec20f2e
	I0602 17:36:10.966332  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2831222.pem /etc/ssl/certs/3ec20f2e.0"
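
Note: the three hash-and-symlink exchanges above are minikube installing its CAs into the node's OpenSSL trust store; each PEM's subject hash becomes the name of a `<hash>.0` symlink under /etc/ssl/certs. A minimal sketch of the same steps, using the minikubeCA.pem path from this log (b5213941 is the hash printed in this run):

	# Compute the subject hash OpenSSL uses to look up CA certificates.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# Link <hash>.0 into the trust directory unless the link already exists.
	sudo test -L "/etc/ssl/certs/${HASH}.0" \
	  || sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
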
	I0602 17:36:10.973650  374798 kubeadm.go:395] StartCluster: {Name:multinode-20220602173558-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220602173558-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:36:10.973792  374798 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 17:36:11.005862  374798 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 17:36:11.013407  374798 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0602 17:36:11.013446  374798 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0602 17:36:11.013452  374798 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0602 17:36:11.013515  374798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 17:36:11.020663  374798 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 17:36:11.020732  374798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 17:36:11.027497  374798 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0602 17:36:11.027529  374798 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0602 17:36:11.027537  374798 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0602 17:36:11.027545  374798 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 17:36:11.027583  374798 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
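
Note: the `ls` probe above is minikube's stale-config check. When any of the four kubeconfig files under /etc/kubernetes is missing, `ls` exits with status 2, so cleanup of old state is skipped and the run proceeds to a fresh `kubeadm init`. Reproducing the probe by hand (hypothetical one-liner, not part of the log):

	# Exit status 2 here means no prior control plane on this node.
	sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	  /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
	  || echo "no existing control-plane config; fresh init"
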
	I0602 17:36:11.027621  374798 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 17:36:11.069588  374798 command_runner.go:130] > [init] Using Kubernetes version: v1.23.6
	I0602 17:36:11.069693  374798 command_runner.go:130] > [preflight] Running pre-flight checks
	I0602 17:36:11.250971  374798 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0602 17:36:11.251059  374798 command_runner.go:130] > KERNEL_VERSION: 5.13.0-1027-gcp
	I0602 17:36:11.251108  374798 command_runner.go:130] > DOCKER_VERSION: 20.10.16
	I0602 17:36:11.251159  374798 command_runner.go:130] > DOCKER_GRAPH_DRIVER: overlay2
	I0602 17:36:11.251205  374798 command_runner.go:130] > OS: Linux
	I0602 17:36:11.251269  374798 command_runner.go:130] > CGROUPS_CPU: enabled
	I0602 17:36:11.251330  374798 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0602 17:36:11.251434  374798 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0602 17:36:11.251514  374798 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0602 17:36:11.251600  374798 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0602 17:36:11.251666  374798 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0602 17:36:11.251725  374798 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0602 17:36:11.251805  374798 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0602 17:36:11.313777  374798 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0602 17:36:11.313873  374798 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0602 17:36:11.313949  374798 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0602 17:36:11.526849  374798 out.go:204]   - Generating certificates and keys ...
	I0602 17:36:11.523058  374798 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0602 17:36:11.527030  374798 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0602 17:36:11.527112  374798 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0602 17:36:11.637229  374798 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0602 17:36:11.892492  374798 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0602 17:36:12.119451  374798 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0602 17:36:12.265928  374798 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0602 17:36:12.335134  374798 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0602 17:36:12.335340  374798 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20220602173558-283122] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0602 17:36:12.446667  374798 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0602 17:36:12.446825  374798 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20220602173558-283122] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0602 17:36:12.592987  374798 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0602 17:36:12.675466  374798 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0602 17:36:12.788976  374798 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0602 17:36:12.789109  374798 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0602 17:36:12.894981  374798 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0602 17:36:13.065487  374798 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0602 17:36:13.222951  374798 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0602 17:36:13.450881  374798 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0602 17:36:13.462448  374798 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0602 17:36:13.462973  374798 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0602 17:36:13.463042  374798 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0602 17:36:13.552757  374798 out.go:204]   - Booting up control plane ...
	I0602 17:36:13.550341  374798 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0602 17:36:13.552889  374798 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0602 17:36:13.553089  374798 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0602 17:36:13.555081  374798 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0602 17:36:13.555860  374798 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0602 17:36:13.557575  374798 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0602 17:36:19.560611  374798 command_runner.go:130] > [apiclient] All control plane components are healthy after 6.003042 seconds
	I0602 17:36:19.560758  374798 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0602 17:36:19.569494  374798 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
	I0602 17:36:19.569761  374798 command_runner.go:130] > NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
	I0602 17:36:20.085297  374798 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0602 17:36:20.085701  374798 command_runner.go:130] > [mark-control-plane] Marking the node multinode-20220602173558-283122 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0602 17:36:20.595890  374798 out.go:204]   - Configuring RBAC rules ...
	I0602 17:36:20.594405  374798 command_runner.go:130] > [bootstrap-token] Using token: ause3q.bz1ngew9hbbt37ig
	I0602 17:36:20.596021  374798 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0602 17:36:20.599210  374798 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0602 17:36:20.603784  374798 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0602 17:36:20.605961  374798 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0602 17:36:20.607778  374798 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0602 17:36:20.609730  374798 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0602 17:36:20.617631  374798 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0602 17:36:20.790355  374798 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0602 17:36:21.037832  374798 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0602 17:36:21.039000  374798 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0602 17:36:21.039103  374798 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0602 17:36:21.039142  374798 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0602 17:36:21.039213  374798 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0602 17:36:21.039288  374798 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0602 17:36:21.039358  374798 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0602 17:36:21.039421  374798 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0602 17:36:21.039490  374798 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0602 17:36:21.039579  374798 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0602 17:36:21.039665  374798 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0602 17:36:21.039763  374798 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0602 17:36:21.039856  374798 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0602 17:36:21.039966  374798 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token ause3q.bz1ngew9hbbt37ig \
	I0602 17:36:21.040092  374798 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:63ba1911ecf093d3a1264e2d920adc95fcef1f12d9f3ed8ad760b71f9de41674 \
	I0602 17:36:21.040125  374798 command_runner.go:130] > 	--control-plane 
	I0602 17:36:21.040233  374798 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0602 17:36:21.040343  374798 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ause3q.bz1ngew9hbbt37ig \
	I0602 17:36:21.040459  374798 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:63ba1911ecf093d3a1264e2d920adc95fcef1f12d9f3ed8ad760b71f9de41674 
	I0602 17:36:21.042685  374798 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1027-gcp\n", err: exit status 1
	I0602 17:36:21.042814  374798 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
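
Note: both warnings are benign for a kic-based node: the `configs` kernel module is simply absent from the GCP host kernel, and minikube starts the kubelet itself rather than enabling the unit at boot. If you did want the unit enabled, the remedy is exactly the command kubeadm prints:

	# Enable the kubelet unit so it starts on boot (kubeadm's suggested fix).
	sudo systemctl enable kubelet.service
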
	I0602 17:36:21.042843  374798 cni.go:95] Creating CNI manager for ""
	I0602 17:36:21.042861  374798 cni.go:156] 1 nodes found, recommending kindnet
	I0602 17:36:21.045221  374798 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0602 17:36:21.046781  374798 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0602 17:36:21.050754  374798 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0602 17:36:21.050794  374798 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0602 17:36:21.050804  374798 command_runner.go:130] > Device: 34h/52d	Inode: 13679887    Links: 1
	I0602 17:36:21.050814  374798 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0602 17:36:21.050826  374798 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0602 17:36:21.050838  374798 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0602 17:36:21.050847  374798 command_runner.go:130] > Change: 2022-06-01 20:34:52.693415195 +0000
	I0602 17:36:21.050858  374798 command_runner.go:130] >  Birth: -
	I0602 17:36:21.050957  374798 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0602 17:36:21.050974  374798 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0602 17:36:21.068822  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0602 17:36:22.032230  374798 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0602 17:36:22.036626  374798 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0602 17:36:22.042548  374798 command_runner.go:130] > serviceaccount/kindnet created
	I0602 17:36:22.050096  374798 command_runner.go:130] > daemonset.apps/kindnet created
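
Note: with the kindnet ClusterRole, ClusterRoleBinding, ServiceAccount, and DaemonSet applied, a quick way to confirm the CNI pods come up would be (illustrative command, not executed in this run):

	# Wait for the kindnet DaemonSet to report its pods ready on the new node.
	kubectl --context multinode-20220602173558-283122 -n kube-system \
	  rollout status daemonset/kindnet --timeout=2m
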
	I0602 17:36:22.054348  374798 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 17:36:22.054429  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:22.054438  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=multinode-20220602173558-283122 minikube.k8s.io/updated_at=2022_06_02T17_36_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:22.062033  374798 command_runner.go:130] > -16
	I0602 17:36:22.152157  374798 command_runner.go:130] > node/multinode-20220602173558-283122 labeled
	I0602 17:36:22.152269  374798 ops.go:34] apiserver oom_adj: -16
	I0602 17:36:22.152293  374798 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0602 17:36:22.152353  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:22.203278  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:22.706784  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:22.762148  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:23.206821  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:23.260408  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:23.707075  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:23.761907  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:24.206385  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:24.258425  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:24.706353  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:24.761858  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:25.206482  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:25.260574  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:25.706188  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:25.757216  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:26.206246  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:26.257308  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:26.706456  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:26.760266  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:27.206851  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:27.257894  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:27.707021  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:27.760746  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:28.206434  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:28.262366  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:28.707076  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:28.759529  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:29.206884  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:29.260865  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:29.706423  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:29.760778  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:30.206260  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:30.261684  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:30.706197  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:30.757716  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:31.206990  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:31.262156  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:31.706548  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:31.759500  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:32.206191  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:32.260407  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:32.706859  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:32.763090  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:33.206769  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:33.261987  374798 command_runner.go:130] > NAME      SECRETS   AGE
	I0602 17:36:33.262014  374798 command_runner.go:130] > default   1         1s
	I0602 17:36:33.262047  374798 kubeadm.go:1045] duration metric: took 11.207681132s to wait for elevateKubeSystemPrivileges.
	I0602 17:36:33.262068  374798 kubeadm.go:397] StartCluster complete in 22.288428325s
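
Note: the repeated NotFound errors above are expected. minikube polls `kubectl get sa default` roughly every 500ms until the token controller creates the default ServiceAccount, which took about 11s here. The loop is equivalent to something like the following sketch, assuming the in-node kubectl path shown in the log:

	# Poll until the "default" ServiceAccount exists in the default namespace.
	until sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
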
	I0602 17:36:33.262094  374798 settings.go:142] acquiring lock: {Name:mkca69c8f6bc293fef8b552d09d771e1f2253f12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:33.262209  374798 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:36:33.262953  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk4aad2ea1df51829b8bb57d56bd4d8e58dc96e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:33.263510  374798 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:36:33.263788  374798 kapi.go:59] client config for multinode-20220602173558-283122: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17122e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0602 17:36:33.264197  374798 cert_rotation.go:137] Starting client certificate rotation controller
	I0602 17:36:33.264406  374798 round_trippers.go:463] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0602 17:36:33.264422  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:33.264430  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:33.264438  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:33.271839  374798 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0602 17:36:33.271865  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:33.271877  374798 round_trippers.go:580]     Audit-Id: d4e70230-1f99-40e8-8f74-bb3ee0adf3d0
	I0602 17:36:33.271887  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:33.271896  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:33.271904  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:33.271914  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:33.271922  374798 round_trippers.go:580]     Content-Length: 291
	I0602 17:36:33.271928  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:33 GMT
	I0602 17:36:33.271961  374798 request.go:1073] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d6c769a8-f303-42bf-8481-f4eeda576acd","resourceVersion":"288","creationTimestamp":"2022-06-02T17:36:20Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0602 17:36:33.272452  374798 request.go:1073] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d6c769a8-f303-42bf-8481-f4eeda576acd","resourceVersion":"288","creationTimestamp":"2022-06-02T17:36:20Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0602 17:36:33.272512  374798 round_trippers.go:463] PUT https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0602 17:36:33.272524  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:33.272534  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:33.272545  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:33.272561  374798 round_trippers.go:473]     Content-Type: application/json
	I0602 17:36:33.276076  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:36:33.276101  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:33.276112  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:33.276121  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:33.276130  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:33.276139  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:33.276148  374798 round_trippers.go:580]     Content-Length: 291
	I0602 17:36:33.276159  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:33 GMT
	I0602 17:36:33.276171  374798 round_trippers.go:580]     Audit-Id: 1aaa1c30-716b-4ef3-8ec8-f341c2d6909e
	I0602 17:36:33.276206  374798 request.go:1073] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d6c769a8-f303-42bf-8481-f4eeda576acd","resourceVersion":"405","creationTimestamp":"2022-06-02T17:36:20Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0602 17:36:33.776655  374798 round_trippers.go:463] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0602 17:36:33.776686  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:33.776697  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:33.776706  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:33.781786  374798 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0602 17:36:33.781869  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:33.781887  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:33.781895  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:33.781904  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:33.781913  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:33.781921  374798 round_trippers.go:580]     Content-Length: 291
	I0602 17:36:33.781946  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:33 GMT
	I0602 17:36:33.781958  374798 round_trippers.go:580]     Audit-Id: 2944c589-af12-4bf7-bb00-4720e6024f36
	I0602 17:36:33.782004  374798 request.go:1073] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d6c769a8-f303-42bf-8481-f4eeda576acd","resourceVersion":"419","creationTimestamp":"2022-06-02T17:36:20Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0602 17:36:33.782149  374798 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20220602173558-283122" rescaled to 1
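
Note: the GET/PUT pair against the coredns Scale subresource above is how minikube drops the default two CoreDNS replicas to one on a fresh control plane. The same change via kubectl would be (illustrative equivalent, not from this log):

	# Scale the coredns Deployment down to a single replica.
	kubectl --context multinode-20220602173558-283122 -n kube-system \
	  scale deployment coredns --replicas=1
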
	I0602 17:36:33.782223  374798 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 17:36:33.784370  374798 out.go:177] * Verifying Kubernetes components...
	I0602 17:36:33.782546  374798 config.go:178] Loaded profile config "multinode-20220602173558-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:36:33.782608  374798 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0602 17:36:33.782649  374798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 17:36:33.786236  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 17:36:33.786402  374798 addons.go:65] Setting storage-provisioner=true in profile "multinode-20220602173558-283122"
	I0602 17:36:33.786429  374798 addons.go:153] Setting addon storage-provisioner=true in "multinode-20220602173558-283122"
	W0602 17:36:33.786438  374798 addons.go:165] addon storage-provisioner should already be in state true
	I0602 17:36:33.786506  374798 host.go:66] Checking if "multinode-20220602173558-283122" exists ...
	I0602 17:36:33.787046  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Status}}
	I0602 17:36:33.787140  374798 addons.go:65] Setting default-storageclass=true in profile "multinode-20220602173558-283122"
	I0602 17:36:33.787187  374798 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20220602173558-283122"
	I0602 17:36:33.787588  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Status}}
	I0602 17:36:33.828497  374798 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:36:33.828818  374798 kapi.go:59] client config for multinode-20220602173558-283122: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17122e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0602 17:36:33.829275  374798 round_trippers.go:463] GET https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0602 17:36:33.829299  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:33.829313  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:33.829328  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:33.832446  374798 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 17:36:33.834436  374798 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 17:36:33.834463  374798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 17:36:33.834523  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:33.836595  374798 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0602 17:36:33.836618  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:33.836628  374798 round_trippers.go:580]     Audit-Id: 5468a20c-4dd2-4eec-8614-10ec1e9d3ee6
	I0602 17:36:33.836636  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:33.836644  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:33.836654  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:33.836667  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:33.836685  374798 round_trippers.go:580]     Content-Length: 109
	I0602 17:36:33.836704  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:33 GMT
	I0602 17:36:33.836738  374798 request.go:1073] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"455"},"items":[]}
	I0602 17:36:33.837129  374798 addons.go:153] Setting addon default-storageclass=true in "multinode-20220602173558-283122"
	W0602 17:36:33.837154  374798 addons.go:165] addon default-storageclass should already be in state true
	I0602 17:36:33.837200  374798 host.go:66] Checking if "multinode-20220602173558-283122" exists ...
	I0602 17:36:33.837632  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Status}}
	I0602 17:36:33.875134  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:36:33.876471  374798 command_runner.go:130] > apiVersion: v1
	I0602 17:36:33.876496  374798 command_runner.go:130] > data:
	I0602 17:36:33.876503  374798 command_runner.go:130] >   Corefile: |
	I0602 17:36:33.876509  374798 command_runner.go:130] >     .:53 {
	I0602 17:36:33.876515  374798 command_runner.go:130] >         errors
	I0602 17:36:33.876522  374798 command_runner.go:130] >         health {
	I0602 17:36:33.876529  374798 command_runner.go:130] >            lameduck 5s
	I0602 17:36:33.876535  374798 command_runner.go:130] >         }
	I0602 17:36:33.876542  374798 command_runner.go:130] >         ready
	I0602 17:36:33.876556  374798 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0602 17:36:33.876567  374798 command_runner.go:130] >            pods insecure
	I0602 17:36:33.876578  374798 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0602 17:36:33.876587  374798 command_runner.go:130] >            ttl 30
	I0602 17:36:33.876594  374798 command_runner.go:130] >         }
	I0602 17:36:33.876605  374798 command_runner.go:130] >         prometheus :9153
	I0602 17:36:33.876617  374798 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0602 17:36:33.876625  374798 command_runner.go:130] >            max_concurrent 1000
	I0602 17:36:33.876637  374798 command_runner.go:130] >         }
	I0602 17:36:33.876647  374798 command_runner.go:130] >         cache 30
	I0602 17:36:33.876654  374798 command_runner.go:130] >         loop
	I0602 17:36:33.876664  374798 command_runner.go:130] >         reload
	I0602 17:36:33.876671  374798 command_runner.go:130] >         loadbalance
	I0602 17:36:33.876680  374798 command_runner.go:130] >     }
	I0602 17:36:33.876688  374798 command_runner.go:130] > kind: ConfigMap
	I0602 17:36:33.876692  374798 command_runner.go:130] > metadata:
	I0602 17:36:33.876703  374798 command_runner.go:130] >   creationTimestamp: "2022-06-02T17:36:20Z"
	I0602 17:36:33.876714  374798 command_runner.go:130] >   name: coredns
	I0602 17:36:33.876721  374798 command_runner.go:130] >   namespace: kube-system
	I0602 17:36:33.876737  374798 command_runner.go:130] >   resourceVersion: "284"
	I0602 17:36:33.876749  374798 command_runner.go:130] >   uid: 29709329-131e-44df-a33f-835deca75ba9
	I0602 17:36:33.876903  374798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0602 17:36:33.877209  374798 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:36:33.877504  374798 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 17:36:33.877525  374798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 17:36:33.877583  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:33.877530  374798 kapi.go:59] client config for multinode-20220602173558-283122: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17122e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0602 17:36:33.877809  374798 node_ready.go:35] waiting up to 6m0s for node "multinode-20220602173558-283122" to be "Ready" ...
	I0602 17:36:33.877887  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:33.877898  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:33.877911  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:33.877925  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:33.880544  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:33.880571  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:33.880581  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:33.880590  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:33 GMT
	I0602 17:36:33.880600  374798 round_trippers.go:580]     Audit-Id: d723961b-622c-4ea2-9d7b-59a46bfd5432
	I0602 17:36:33.880609  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:33.880619  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:33.880627  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:33.880750  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:33.913635  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:36:34.048392  374798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 17:36:34.049045  374798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 17:36:34.162342  374798 command_runner.go:130] > configmap/coredns replaced
	I0602 17:36:34.162383  374798 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
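
Note: the sed pipeline a few lines up injects a `hosts` block ahead of CoreDNS's `forward` plugin, mapping host.minikube.internal to the host gateway 192.168.49.1. A way to verify the record resolves from inside the cluster (hypothetical check, modeled on the registry test's busybox probe, not executed here):

	# Resolve the injected host record from a throwaway pod.
	kubectl --context multinode-20220602173558-283122 run dnscheck --rm -it \
	  --image=busybox --restart=Never -- nslookup host.minikube.internal
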
	I0602 17:36:34.382725  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:34.382759  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:34.382772  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:34.382782  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:34.436445  374798 round_trippers.go:574] Response Status: 200 OK in 53 milliseconds
	I0602 17:36:34.436476  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:34.436488  374798 round_trippers.go:580]     Audit-Id: be4afe3f-428d-4230-8bcb-26b5470216ec
	I0602 17:36:34.436496  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:34.436505  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:34.436514  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:34.436523  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:34.436532  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:34 GMT
	I0602 17:36:34.436661  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:34.483293  374798 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0602 17:36:34.483328  374798 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0602 17:36:34.483340  374798 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0602 17:36:34.483353  374798 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0602 17:36:34.483360  374798 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0602 17:36:34.483368  374798 command_runner.go:130] > pod/storage-provisioner created
	I0602 17:36:34.483467  374798 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0602 17:36:34.486487  374798 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0602 17:36:34.487937  374798 addons.go:417] enableAddons completed in 705.329715ms
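
Editor's note: the ssh_runner lines near the top of this block show how addons are applied — kubectl is invoked inside the node against the manifests staged under /etc/kubernetes/addons. A rough sketch of that shape follows, assuming a plain ssh invocation; minikube's real ssh_runner abstracts the transport (SSH or docker exec) and this helper name is hypothetical.

	// Sketch: run "kubectl apply -f <manifest>" inside the node over SSH.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func applyAddon(sshTarget, manifest string) ([]byte, error) {
		cmd := fmt.Sprintf(
			"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f %s",
			manifest)
		// For the docker driver, "docker exec <node> sh -c <cmd>" would be the
		// closer analogue; plain ssh is shown for simplicity.
		return exec.Command("ssh", sshTarget, cmd).CombinedOutput()
	}
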
	I0602 17:36:34.881753  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:34.881782  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:34.881792  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:34.881798  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:34.884460  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:34.884491  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:34.884500  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:34.884509  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:34.884518  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:34 GMT
	I0602 17:36:34.884527  374798 round_trippers.go:580]     Audit-Id: 8214eca4-6a2c-4773-9576-ead6cc5ca3a2
	I0602 17:36:34.884538  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:34.884546  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:34.884684  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:35.382229  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:35.382254  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:35.382263  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:35.382269  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:35.384964  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:35.384998  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:35.385027  374798 round_trippers.go:580]     Audit-Id: 0b7399db-25d6-4dd7-a0ea-fd2ccd6d26dd
	I0602 17:36:35.385039  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:35.385050  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:35.385063  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:35.385075  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:35.385093  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:35 GMT
	I0602 17:36:35.385250  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:35.881818  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:35.881846  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:35.881859  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:35.881869  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:35.884559  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:35.884601  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:35.884615  374798 round_trippers.go:580]     Audit-Id: 1c0a3a32-9450-4c1e-a800-b69505de7227
	I0602 17:36:35.884625  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:35.884636  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:35.884646  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:35.884674  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:35.884684  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:35 GMT
	I0602 17:36:35.884824  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:35.885214  374798 node_ready.go:58] node "multinode-20220602173558-283122" has status "Ready":"False"
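
Editor's note: the repeating GET /api/v1/nodes/... cycles above (roughly every 500ms) are node_ready.go polling until the node's Ready condition turns True. A minimal sketch of such a loop, assuming client-go's wait.PollImmediate; waitNodeReady is a hypothetical name, not the minikube function.

	// Poll the node object until its NodeReady condition is True or the
	// timeout elapses; transient API errors just trigger another poll.
	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}
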
	I0602 17:36:36.382383  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:36.382412  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:36.382424  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:36.382430  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:36.385104  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:36.385138  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:36.385151  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:36.385161  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:36.385168  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:36.385174  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:36 GMT
	I0602 17:36:36.385180  374798 round_trippers.go:580]     Audit-Id: 78c55018-e381-4482-affc-b43695cd4d31
	I0602 17:36:36.385193  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:36.385311  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:36.881881  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:36.881911  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:36.881924  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:36.881932  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:36.884312  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:36.884339  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:36.884347  374798 round_trippers.go:580]     Audit-Id: 8f713047-a40a-4c5c-8002-d7e2fdcb6a7b
	I0602 17:36:36.884353  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:36.884359  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:36.884364  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:36.884369  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:36.884374  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:36 GMT
	I0602 17:36:36.884577  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:37.381784  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:37.381811  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:37.381823  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:37.381831  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:37.384948  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:36:37.385039  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:37.385060  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:37.385069  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:37.385079  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:37 GMT
	I0602 17:36:37.385089  374798 round_trippers.go:580]     Audit-Id: acc91c11-3a35-44a5-ac6d-5f81df1e7892
	I0602 17:36:37.385103  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:37.385113  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:37.385269  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:37.882340  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:37.882363  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:37.882372  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:37.882378  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:37.884935  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:37.884960  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:37.884968  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:37.884974  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:37.884980  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:37.884986  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:37.884994  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:37 GMT
	I0602 17:36:37.885002  374798 round_trippers.go:580]     Audit-Id: cc30f438-39cd-480f-b557-86be325b2401
	I0602 17:36:37.885151  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:37.885470  374798 node_ready.go:58] node "multinode-20220602173558-283122" has status "Ready":"False"
	I0602 17:36:38.382132  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:38.382155  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:38.382164  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:38.382173  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:38.384600  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:38.384631  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:38.384642  374798 round_trippers.go:580]     Audit-Id: ceb14e74-25b6-49af-9a63-2ae776c0038a
	I0602 17:36:38.384650  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:38.384659  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:38.384667  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:38.384684  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:38.384710  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:38 GMT
	I0602 17:36:38.384842  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:38.882277  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:38.882304  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:38.882313  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:38.882319  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:38.884766  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:38.884796  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:38.884807  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:38.884816  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:38 GMT
	I0602 17:36:38.884826  374798 round_trippers.go:580]     Audit-Id: 43bbdb5c-78f8-470d-be90-044079b09979
	I0602 17:36:38.884835  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:38.884847  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:38.884856  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:38.884938  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:39.382268  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:39.382293  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:39.382305  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:39.382314  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:39.384941  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:39.384973  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:39.384984  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:39.384992  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:39.385001  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:39.385025  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:39.385034  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:39 GMT
	I0602 17:36:39.385047  374798 round_trippers.go:580]     Audit-Id: c57b02ee-3a67-498a-940c-538cc59c9453
	I0602 17:36:39.385162  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:39.882475  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:39.882504  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:39.882513  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:39.882520  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:39.885049  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:39.885077  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:39.885085  374798 round_trippers.go:580]     Audit-Id: 18978d43-e2b1-455e-9fc8-7f151e5f73d9
	I0602 17:36:39.885091  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:39.885096  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:39.885102  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:39.885107  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:39.885113  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:39 GMT
	I0602 17:36:39.885261  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:39.885589  374798 node_ready.go:58] node "multinode-20220602173558-283122" has status "Ready":"False"
	I0602 17:36:40.381716  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:40.381743  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:40.381752  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:40.381758  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:40.384943  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:36:40.384973  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:40.384984  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:40 GMT
	I0602 17:36:40.384992  374798 round_trippers.go:580]     Audit-Id: ccefe130-264f-439d-b847-b78f155330df
	I0602 17:36:40.385000  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:40.385027  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:40.385038  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:40.385052  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:40.385162  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:40.881784  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:40.881812  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:40.881825  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:40.881836  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:40.884219  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:40.884247  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:40.884260  374798 round_trippers.go:580]     Audit-Id: 00fe19e7-9b18-44ec-8aae-ccb952b4c770
	I0602 17:36:40.884270  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:40.884279  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:40.884288  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:40.884302  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:40.884314  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:40 GMT
	I0602 17:36:40.884444  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:41.381898  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:41.381925  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:41.381935  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:41.381941  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:41.384644  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:41.384669  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:41.384680  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:41.384691  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:41.384700  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:41.384713  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:41.384732  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:41 GMT
	I0602 17:36:41.384741  374798 round_trippers.go:580]     Audit-Id: 5b21a3d2-1435-4963-8388-b858c3a7c7f2
	I0602 17:36:41.384870  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:41.882270  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:41.882295  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:41.882304  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:41.882310  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:41.884892  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:41.884928  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:41.884938  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:41.884945  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:41 GMT
	I0602 17:36:41.884950  374798 round_trippers.go:580]     Audit-Id: 339cbd7d-9077-4c5e-abae-6364661fdbe1
	I0602 17:36:41.884955  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:41.884961  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:41.884966  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:41.885116  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:41.885520  374798 node_ready.go:49] node "multinode-20220602173558-283122" has status "Ready":"True"
	I0602 17:36:41.885550  374798 node_ready.go:38] duration metric: took 8.007719429s waiting for node "multinode-20220602173558-283122" to be "Ready" ...
	I0602 17:36:41.885565  374798 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
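
Editor's note: once the node is Ready, pod_ready.go does the same dance per label selector (k8s-app=kube-dns, component=etcd, ...), as the following GETs against /namespaces/kube-system/pods show. A hedged sketch of that wait, again assuming client-go; podReady and waitPodsReady are illustrative names and this is not minikube's pod_ready.go.

	// List kube-system pods matching a selector and poll until every match
	// reports the PodReady condition as True.
	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func waitPodsReady(cs *kubernetes.Clientset, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // not listed yet: poll again
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return true, nil
		})
	}
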
	I0602 17:36:41.885659  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0602 17:36:41.885675  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:41.885686  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:41.885696  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:41.889487  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:36:41.889519  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:41.889532  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:41.889542  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:41.889551  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:41.889561  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:41 GMT
	I0602 17:36:41.889570  374798 round_trippers.go:580]     Audit-Id: 822bdea6-7c93-4e70-8bf6-4adb46c14093
	I0602 17:36:41.889582  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:41.890064  374798 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"492","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:a
rgs":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 55559 chars]
	I0602 17:36:41.893684  374798 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-l5jxv" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:41.893770  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-l5jxv
	I0602 17:36:41.893783  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:41.893793  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:41.893805  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:41.896166  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:41.896191  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:41.896202  374798 round_trippers.go:580]     Audit-Id: 77625ecc-db4c-428d-8185-dac7f7506624
	I0602 17:36:41.896211  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:41.896253  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:41.896262  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:41.896272  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:41.896280  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:41 GMT
	I0602 17:36:41.896436  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"492","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5852 chars]
	I0602 17:36:41.897095  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:41.897121  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:41.897134  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:41.897145  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:41.899440  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:41.899464  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:41.899475  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:41.899484  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:41.899499  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:41 GMT
	I0602 17:36:41.899513  374798 round_trippers.go:580]     Audit-Id: 5415fdb7-ea2f-4059-ac32-e38c587a15c5
	I0602 17:36:41.899525  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:41.899535  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:41.899645  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:42.400904  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-l5jxv
	I0602 17:36:42.400936  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.400947  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.400957  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.403451  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:42.403481  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.403493  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.403502  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.403515  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.403528  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.403537  374798 round_trippers.go:580]     Audit-Id: 3705a7ba-f89f-40a0-8741-17a77d8c14f2
	I0602 17:36:42.403542  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.403650  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"492","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5852 chars]
	I0602 17:36:42.404117  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:42.404135  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.404144  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.404150  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.406145  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.406166  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.406178  374798 round_trippers.go:580]     Audit-Id: 490707a4-7c37-49b5-a7a9-9a06dc2599ee
	I0602 17:36:42.406187  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.406203  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.406216  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.406229  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.406242  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.406341  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:42.900969  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-l5jxv
	I0602 17:36:42.901001  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.901026  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.901038  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.903594  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:42.903630  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.903639  374798 round_trippers.go:580]     Audit-Id: ecfab05a-579f-4fdc-be30-6bdc5e7ad588
	I0602 17:36:42.903646  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.903653  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.903659  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.903667  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.903676  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.903862  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"504","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5979 chars]
	I0602 17:36:42.904318  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:42.904332  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.904340  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.904346  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.908145  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:36:42.908174  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.908185  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.908193  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.908203  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.908217  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.908229  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.908244  374798 round_trippers.go:580]     Audit-Id: 43fe0ab3-9516-4693-adba-af4d86f892c2
	I0602 17:36:42.908352  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:42.908675  374798 pod_ready.go:92] pod "coredns-64897985d-l5jxv" in "kube-system" namespace has status "Ready":"True"
	I0602 17:36:42.908702  374798 pod_ready.go:81] duration metric: took 1.014987777s waiting for pod "coredns-64897985d-l5jxv" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:42.908716  374798 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:42.908778  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20220602173558-283122
	I0602 17:36:42.908789  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.908801  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.908811  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.910714  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.910734  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.910741  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.910754  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.910771  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.910776  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.910785  374798 round_trippers.go:580]     Audit-Id: f83c3364-3cb8-4451-b9bf-39c1e747eea7
	I0602 17:36:42.910791  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.910870  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220602173558-283122","namespace":"kube-system","uid":"2de3dc57-6748-4c20-bf65-3b2cbd2f8a0f","resourceVersion":"331","creationTimestamp":"2022-06-02T17:36:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"de8e89a254ff48460c714894f4297613","kubernetes.io/config.mirror":"de8e89a254ff48460c714894f4297613","kubernetes.io/config.seen":"2022-06-02T17:36:20.948851860Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:21Z","fieldsType":"FieldsV1","
fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes. [truncated 5804 chars]
	I0602 17:36:42.911245  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:42.911264  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.911274  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.911284  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.912971  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.912997  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.913007  374798 round_trippers.go:580]     Audit-Id: ba2b8761-4995-451f-8bd5-a8b8676f8068
	I0602 17:36:42.913044  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.913056  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.913071  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.913084  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.913097  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.913181  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:42.913448  374798 pod_ready.go:92] pod "etcd-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:36:42.913458  374798 pod_ready.go:81] duration metric: took 4.73125ms waiting for pod "etcd-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:42.913487  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:42.913534  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220602173558-283122
	I0602 17:36:42.913544  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.913550  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.913557  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.915483  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.915512  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.915524  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.915534  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.915549  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.915562  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.915577  374798 round_trippers.go:580]     Audit-Id: 436a6ae5-9eed-4887-a032-fad125d6652c
	I0602 17:36:42.915591  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.915761  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220602173558-283122","namespace":"kube-system","uid":"999b4342-ad8c-46aa-a5a0-bdd14089e393","resourceVersion":"326","creationTimestamp":"2022-06-02T17:36:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"9b88133a9e18b6ec7d53499c3c2debcc","kubernetes.io/config.mirror":"9b88133a9e18b6ec7d53499c3c2debcc","kubernetes.io/config.seen":"2022-06-02T17:36:13.934512571Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:20Z
","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{". [truncated 8313 chars]
	I0602 17:36:42.916327  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:42.916344  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.916356  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.916373  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.918167  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.918191  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.918198  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.918203  374798 round_trippers.go:580]     Audit-Id: e56b1373-dd97-4e77-834f-6cc07d70be06
	I0602 17:36:42.918208  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.918213  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.918218  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.918229  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.918299  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:42.918572  374798 pod_ready.go:92] pod "kube-apiserver-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:36:42.918582  374798 pod_ready.go:81] duration metric: took 5.085479ms waiting for pod "kube-apiserver-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:42.918590  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:42.918672  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220602173558-283122
	I0602 17:36:42.918684  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.918691  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.918697  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.920425  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.920446  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.920454  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.920459  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.920466  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.920475  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.920492  374798 round_trippers.go:580]     Audit-Id: d3343b46-d1d3-4f0a-b863-66487bb1200b
	I0602 17:36:42.920502  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.920654  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220602173558-283122","namespace":"kube-system","uid":"dc0ed8b1-4d22-46e1-a708-fee470e6c6fe","resourceVersion":"330","creationTimestamp":"2022-06-02T17:36:21Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"73385622a1da8f9c02cb3f38e98edff7","kubernetes.io/config.mirror":"73385622a1da8f9c02cb3f38e98edff7","kubernetes.io/config.seen":"2022-06-02T17:36:20.948871153Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":
{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror [truncated 7888 chars]
	I0602 17:36:42.921122  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:42.921138  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.921145  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.921151  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.922746  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.922769  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.922780  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.922789  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.922798  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.922815  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.922823  374798 round_trippers.go:580]     Audit-Id: 8426c5dc-f4c9-4c8a-a3c8-7ba2fe486e5f
	I0602 17:36:42.922837  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.922919  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:42.923203  374798 pod_ready.go:92] pod "kube-controller-manager-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:36:42.923217  374798 pod_ready.go:81] duration metric: took 4.617927ms waiting for pod "kube-controller-manager-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:42.923225  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q8c4p" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:42.923264  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8c4p
	I0602 17:36:42.923272  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.923279  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.923285  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.924944  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.924964  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.924972  374798 round_trippers.go:580]     Audit-Id: b616c995-8018-4bb2-97ce-fecc1ade34cb
	I0602 17:36:42.924981  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.924988  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.924997  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.925034  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.925048  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.925133  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8c4p","generateName":"kube-proxy-","namespace":"kube-system","uid":"f1878b35-b1dd-4c80-b1c8-6848ceeac02c","resourceVersion":"475","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"controller-revision-hash":"549f7469d9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdf29778-5a78-41df-8413-a6f3417a1d56","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdf29778-5a78-41df-8413-a6f3417a1d56\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5544 chars]
	I0602 17:36:43.082925  374798 request.go:533] Waited for 157.393761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:43.082990  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:43.082996  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:43.083004  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:43.083011  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:43.085682  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:43.085714  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:43.085724  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:43 GMT
	I0602 17:36:43.085733  374798 round_trippers.go:580]     Audit-Id: 03d7d418-fca5-41d5-bc3b-74accadba0d8
	I0602 17:36:43.085743  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:43.085752  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:43.085761  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:43.085773  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:43.085883  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:43.086286  374798 pod_ready.go:92] pod "kube-proxy-q8c4p" in "kube-system" namespace has status "Ready":"True"
	I0602 17:36:43.086304  374798 pod_ready.go:81] duration metric: took 163.072826ms waiting for pod "kube-proxy-q8c4p" in "kube-system" namespace to be "Ready" ...
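Note on the "Waited for ... due to client-side throttling, not priority and fairness" lines above and below: that delay comes from client-go's default token-bucket rate limiter (QPS 5, burst 10), not from API Priority and Fairness on the server. A minimal sketch, assuming client-go, of where those knobs live (the kubeconfig path is illustrative):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a rest.Config from a kubeconfig; the path is an example.
        config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        // client-go defaults to QPS=5 and Burst=10. Once the burst bucket is
        // empty, each request blocks for a token, which is what produces the
        // "Waited for ... due to client-side throttling" messages in this log.
        config.QPS = 50
        config.Burst = 100
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", clientset)
    }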
	I0602 17:36:43.086314  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:43.282466  374798 request.go:533] Waited for 196.055078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220602173558-283122
	I0602 17:36:43.282533  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220602173558-283122
	I0602 17:36:43.282540  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:43.282549  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:43.282559  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:43.284908  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:43.284929  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:43.284936  374798 round_trippers.go:580]     Audit-Id: 4f2bde1b-7804-42d5-a0eb-4e1ad1850442
	I0602 17:36:43.284942  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:43.284948  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:43.284952  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:43.284958  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:43.284963  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:43 GMT
	I0602 17:36:43.285111  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220602173558-283122","namespace":"kube-system","uid":"b207e4b1-a64d-4aaf-bd7b-5eaec8e23004","resourceVersion":"366","creationTimestamp":"2022-06-02T17:36:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"16c53ed8f606fa43c821fa27956bef6a","kubernetes.io/config.mirror":"16c53ed8f606fa43c821fa27956bef6a","kubernetes.io/config.seen":"2022-06-02T17:36:20.948872903Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kuberne
tes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes [truncated 4770 chars]
	I0602 17:36:43.482565  374798 request.go:533] Waited for 197.034057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:43.482640  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:43.482647  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:43.482659  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:43.482668  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:43.485181  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:43.485209  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:43.485216  374798 round_trippers.go:580]     Audit-Id: ed18adef-a534-489a-9f90-6be29c81d2a0
	I0602 17:36:43.485222  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:43.485227  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:43.485233  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:43.485238  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:43.485243  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:43 GMT
	I0602 17:36:43.485389  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:43.485803  374798 pod_ready.go:92] pod "kube-scheduler-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:36:43.485821  374798 pod_ready.go:81] duration metric: took 399.498567ms waiting for pod "kube-scheduler-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:43.485835  374798 pod_ready.go:38] duration metric: took 1.600244091s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 17:36:43.485868  374798 api_server.go:51] waiting for apiserver process to appear ...
	I0602 17:36:43.485921  374798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 17:36:43.495835  374798 command_runner.go:130] > 1706
	I0602 17:36:43.496693  374798 api_server.go:71] duration metric: took 9.714421861s to wait for apiserver process to appear ...
	I0602 17:36:43.496726  374798 api_server.go:87] waiting for apiserver healthz status ...
	I0602 17:36:43.496739  374798 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0602 17:36:43.501406  374798 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
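The healthz gate above is a plain HTTP GET expecting the body "ok". A rough Go equivalent (this sketch skips TLS verification for brevity; the real check trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Illustrative healthz probe against the apiserver endpoint above.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }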
	I0602 17:36:43.501469  374798 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0602 17:36:43.501477  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:43.501490  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:43.501500  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:43.502296  374798 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0602 17:36:43.502317  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:43.502324  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:43.502330  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:43.502336  374798 round_trippers.go:580]     Content-Length: 263
	I0602 17:36:43.502344  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:43 GMT
	I0602 17:36:43.502353  374798 round_trippers.go:580]     Audit-Id: 419a73b5-8362-4424-b5c3-b4d710067b29
	I0602 17:36:43.502363  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:43.502378  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:43.502399  374798 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "23",
	  "gitVersion": "v1.23.6",
	  "gitCommit": "ad3338546da947756e8a88aa6822e9c11e7eac22",
	  "gitTreeState": "clean",
	  "buildDate": "2022-04-14T08:43:11Z",
	  "goVersion": "go1.17.9",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0602 17:36:43.502499  374798 api_server.go:140] control plane version: v1.23.6
	I0602 17:36:43.502515  374798 api_server.go:130] duration metric: took 5.782895ms to wait for apiserver health ...
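The /version payload above unmarshals into the standard version.Info struct from apimachinery. A minimal sketch of the decode (payload abbreviated from the response body shown above):

    package main

    import (
        "encoding/json"
        "fmt"

        "k8s.io/apimachinery/pkg/version"
    )

    func main() {
        // Abbreviated copy of the /version response body logged above.
        payload := []byte(`{"major":"1","minor":"23","gitVersion":"v1.23.6","platform":"linux/amd64"}`)
        var info version.Info
        if err := json.Unmarshal(payload, &info); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", info.GitVersion) // v1.23.6
    }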
	I0602 17:36:43.502524  374798 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 17:36:43.682929  374798 request.go:533] Waited for 180.322156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0602 17:36:43.683002  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0602 17:36:43.683007  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:43.683016  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:43.683024  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:43.686518  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:36:43.686544  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:43.686552  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:43.686558  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:43.686564  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:43.686572  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:43 GMT
	I0602 17:36:43.686581  374798 round_trippers.go:580]     Audit-Id: 806d015f-7669-4f06-b512-26ed9d66431a
	I0602 17:36:43.686590  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:43.687066  374798 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"504","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:a
rgs":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 55670 chars]
	I0602 17:36:43.689544  374798 system_pods.go:59] 8 kube-system pods found
	I0602 17:36:43.689576  374798 system_pods.go:61] "coredns-64897985d-l5jxv" [d796da5e-d4e3-4761-84e2-c742ea94211a] Running
	I0602 17:36:43.689585  374798 system_pods.go:61] "etcd-multinode-20220602173558-283122" [2de3dc57-6748-4c20-bf65-3b2cbd2f8a0f] Running
	I0602 17:36:43.689594  374798 system_pods.go:61] "kindnet-d4jwl" [02c02672-9134-4bb8-abdb-c15c1f3334ac] Running
	I0602 17:36:43.689605  374798 system_pods.go:61] "kube-apiserver-multinode-20220602173558-283122" [999b4342-ad8c-46aa-a5a0-bdd14089e393] Running
	I0602 17:36:43.689610  374798 system_pods.go:61] "kube-controller-manager-multinode-20220602173558-283122" [dc0ed8b1-4d22-46e1-a708-fee470e6c6fe] Running
	I0602 17:36:43.689623  374798 system_pods.go:61] "kube-proxy-q8c4p" [f1878b35-b1dd-4c80-b1c8-6848ceeac02c] Running
	I0602 17:36:43.689634  374798 system_pods.go:61] "kube-scheduler-multinode-20220602173558-283122" [b207e4b1-a64d-4aaf-bd7b-5eaec8e23004] Running
	I0602 17:36:43.689645  374798 system_pods.go:61] "storage-provisioner" [8a59fe44-28d0-431f-9a48-d4f0705d7d5a] Running
	I0602 17:36:43.689656  374798 system_pods.go:74] duration metric: took 187.120137ms to wait for pod list to return data ...
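The kube-system sweep above boils down to one pod list plus a phase check per item. A sketch with client-go (kubeconfig path illustrative, as before):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // List kube-system pods and report the ones that reached Running,
        // mirroring the system_pods.go output above.
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            if p.Status.Phase == corev1.PodRunning {
                fmt.Printf("%q [%s] Running\n", p.Name, p.UID)
            }
        }
    }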
	I0602 17:36:43.689670  374798 default_sa.go:34] waiting for default service account to be created ...
	I0602 17:36:43.883111  374798 request.go:533] Waited for 193.354977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0602 17:36:43.883190  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0602 17:36:43.883195  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:43.883203  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:43.883210  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:43.885786  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:43.885817  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:43.885829  374798 round_trippers.go:580]     Content-Length: 304
	I0602 17:36:43.885838  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:43 GMT
	I0602 17:36:43.885847  374798 round_trippers.go:580]     Audit-Id: 868485cc-c800-4162-a124-56b1f73a5702
	I0602 17:36:43.885861  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:43.885875  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:43.885885  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:43.885898  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:43.885930  374798 request.go:1073] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"756fbbad-4c5e-4dbf-b9ed-2ce106f7008e","resourceVersion":"400","creationTimestamp":"2022-06-02T17:36:32Z"},"secrets":[{"name":"default-token-jdglt"}]}]}
	I0602 17:36:43.886187  374798 default_sa.go:45] found service account: "default"
	I0602 17:36:43.886206  374798 default_sa.go:55] duration metric: took 196.526159ms for default service account to be created ...
	I0602 17:36:43.886219  374798 system_pods.go:116] waiting for k8s-apps to be running ...
	I0602 17:36:44.082682  374798 request.go:533] Waited for 196.347313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0602 17:36:44.082750  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0602 17:36:44.082759  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:44.082772  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:44.082792  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:44.086440  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:36:44.086467  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:44.086475  374798 round_trippers.go:580]     Audit-Id: 820a461e-8614-4528-afd2-8a5a29ee3a6f
	I0602 17:36:44.086480  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:44.086486  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:44.086491  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:44.086496  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:44.086501  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:44 GMT
	I0602 17:36:44.086986  374798 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"504","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:a
rgs":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 55670 chars]
	I0602 17:36:44.089539  374798 system_pods.go:86] 8 kube-system pods found
	I0602 17:36:44.089573  374798 system_pods.go:89] "coredns-64897985d-l5jxv" [d796da5e-d4e3-4761-84e2-c742ea94211a] Running
	I0602 17:36:44.089582  374798 system_pods.go:89] "etcd-multinode-20220602173558-283122" [2de3dc57-6748-4c20-bf65-3b2cbd2f8a0f] Running
	I0602 17:36:44.089589  374798 system_pods.go:89] "kindnet-d4jwl" [02c02672-9134-4bb8-abdb-c15c1f3334ac] Running
	I0602 17:36:44.089595  374798 system_pods.go:89] "kube-apiserver-multinode-20220602173558-283122" [999b4342-ad8c-46aa-a5a0-bdd14089e393] Running
	I0602 17:36:44.089600  374798 system_pods.go:89] "kube-controller-manager-multinode-20220602173558-283122" [dc0ed8b1-4d22-46e1-a708-fee470e6c6fe] Running
	I0602 17:36:44.089611  374798 system_pods.go:89] "kube-proxy-q8c4p" [f1878b35-b1dd-4c80-b1c8-6848ceeac02c] Running
	I0602 17:36:44.089623  374798 system_pods.go:89] "kube-scheduler-multinode-20220602173558-283122" [b207e4b1-a64d-4aaf-bd7b-5eaec8e23004] Running
	I0602 17:36:44.089632  374798 system_pods.go:89] "storage-provisioner" [8a59fe44-28d0-431f-9a48-d4f0705d7d5a] Running
	I0602 17:36:44.089642  374798 system_pods.go:126] duration metric: took 203.41266ms to wait for k8s-apps to be running ...
	I0602 17:36:44.089656  374798 system_svc.go:44] waiting for kubelet service to be running ....
	I0602 17:36:44.089712  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 17:36:44.099440  374798 system_svc.go:56] duration metric: took 9.774081ms WaitForService to wait for kubelet.
	I0602 17:36:44.099473  374798 kubeadm.go:572] duration metric: took 10.317204042s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0602 17:36:44.099501  374798 node_conditions.go:102] verifying NodePressure condition ...
	I0602 17:36:44.282943  374798 request.go:533] Waited for 183.345954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0602 17:36:44.283014  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0602 17:36:44.283022  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:44.283032  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:44.283039  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:44.285457  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:44.285478  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:44.285485  374798 round_trippers.go:580]     Audit-Id: 8639181d-4b44-434d-9cbd-b521c4a33faa
	I0602 17:36:44.285490  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:44.285496  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:44.285502  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:44.285511  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:44.285519  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:44 GMT
	I0602 17:36:44.285616  374798 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"510"},"items":[{"metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"
0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma [truncated 4949 chars]
	I0602 17:36:44.286014  374798 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0602 17:36:44.286035  374798 node_conditions.go:123] node cpu capacity is 8
	I0602 17:36:44.286057  374798 node_conditions.go:105] duration metric: took 186.549549ms to run NodePressure ...
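The capacity figures above come straight off the Node status. A sketch of reading the same fields, assuming a *kubernetes.Clientset built as in the earlier snippets:

    package nodecheck

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // PrintNodeCapacity reports the ephemeral-storage and cpu capacity that
    // the NodePressure verification above reads from each node's status.
    func PrintNodeCapacity(ctx context.Context, clientset *kubernetes.Clientset) error {
        nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }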
	I0602 17:36:44.286072  374798 start.go:213] waiting for startup goroutines ...
	I0602 17:36:44.288611  374798 out.go:177] 
	I0602 17:36:44.290446  374798 config.go:178] Loaded profile config "multinode-20220602173558-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:36:44.290537  374798 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/config.json ...
	I0602 17:36:44.292480  374798 out.go:177] * Starting worker node multinode-20220602173558-283122-m02 in cluster multinode-20220602173558-283122
	I0602 17:36:44.294454  374798 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 17:36:44.295822  374798 out.go:177] * Pulling base image ...
	I0602 17:36:44.297124  374798 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:36:44.297147  374798 cache.go:57] Caching tarball of preloaded images
	I0602 17:36:44.297214  374798 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 17:36:44.297279  374798 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 17:36:44.297302  374798 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 17:36:44.297389  374798 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/config.json ...
	I0602 17:36:44.341787  374798 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 17:36:44.341821  374798 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 17:36:44.341836  374798 cache.go:206] Successfully downloaded all kic artifacts
	I0602 17:36:44.341872  374798 start.go:352] acquiring machines lock for multinode-20220602173558-283122-m02: {Name:mke593d81dca3b8fcdc7cb8fbeba179c36b6a97d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 17:36:44.342010  374798 start.go:356] acquired machines lock for "multinode-20220602173558-283122-m02" in 117.787µs
	I0602 17:36:44.342036  374798 start.go:91] Provisioning new machine with config: &{Name:multinode-20220602173558-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220602173558-283122 Namespac
e:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name:m02 IP: Port:0 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0602 17:36:44.342114  374798 start.go:131] createHost starting for "m02" (driver="docker")
	I0602 17:36:44.344583  374798 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0602 17:36:44.344676  374798 start.go:165] libmachine.API.Create for "multinode-20220602173558-283122" (driver="docker")
	I0602 17:36:44.344705  374798 client.go:168] LocalClient.Create starting
	I0602 17:36:44.344774  374798 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem
	I0602 17:36:44.344801  374798 main.go:134] libmachine: Decoding PEM data...
	I0602 17:36:44.344819  374798 main.go:134] libmachine: Parsing certificate...
	I0602 17:36:44.344879  374798 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem
	I0602 17:36:44.344902  374798 main.go:134] libmachine: Decoding PEM data...
	I0602 17:36:44.344915  374798 main.go:134] libmachine: Parsing certificate...
	I0602 17:36:44.345145  374798 cli_runner.go:164] Run: docker network inspect multinode-20220602173558-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 17:36:44.375349  374798 network_create.go:76] Found existing network {name:multinode-20220602173558-283122 subnet:0xc000a9c000 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0602 17:36:44.375419  374798 kic.go:106] calculated static IP "192.168.49.3" for the "multinode-20220602173558-283122-m02" container
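The "calculated static IP" step above is simple arithmetic on the cluster network's gateway: the control plane took 192.168.49.2, so the second machine gets 192.168.49.3. A sketch of that derivation (not minikube's exact code):

    package main

    import (
        "fmt"
        "net"
    )

    // nthIP derives the address for the nth machine on the cluster network
    // by offsetting from the gateway: machine 0 gets gateway+1, machine 1
    // gets gateway+2, and so on.
    func nthIP(gateway net.IP, n int) net.IP {
        ip := gateway.To4()
        out := make(net.IP, len(ip))
        copy(out, ip)
        out[3] += byte(n + 1)
        return out
    }

    func main() {
        gw := net.ParseIP("192.168.49.1")
        fmt.Println(nthIP(gw, 1)) // second machine -> 192.168.49.3
    }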
	I0602 17:36:44.375473  374798 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 17:36:44.406475  374798 cli_runner.go:164] Run: docker volume create multinode-20220602173558-283122-m02 --label name.minikube.sigs.k8s.io=multinode-20220602173558-283122-m02 --label created_by.minikube.sigs.k8s.io=true
	I0602 17:36:44.440789  374798 oci.go:103] Successfully created a docker volume multinode-20220602173558-283122-m02
	I0602 17:36:44.440871  374798 cli_runner.go:164] Run: docker run --rm --name multinode-20220602173558-283122-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220602173558-283122-m02 --entrypoint /usr/bin/test -v multinode-20220602173558-283122-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 17:36:44.986137  374798 oci.go:107] Successfully prepared a docker volume multinode-20220602173558-283122-m02
	I0602 17:36:44.986188  374798 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:36:44.986211  374798 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 17:36:44.986271  374798 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220602173558-283122-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 17:36:51.696213  374798 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220602173558-283122-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (6.709881411s)
	I0602 17:36:51.696251  374798 kic.go:188] duration metric: took 6.710035 seconds to extract preloaded images to volume
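The extraction above mounts the preload tarball read-only into a throwaway container and untars it into the machine's volume with lz4 decompression. An os/exec sketch of the equivalent invocation (paths, volume name, and image tag are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mount the preload tarball at /preloaded.tar (read-only), mount the
        // machine's volume at /extractDir, and untar with lz4 decompression.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro",
            "-v", "somevolume:/extractDir",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.31",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        if err != nil {
            panic(fmt.Errorf("extract failed: %v: %s", err, out))
        }
    }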
	W0602 17:36:51.696370  374798 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0602 17:36:51.696485  374798 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 17:36:51.801401  374798 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20220602173558-283122-m02 --name multinode-20220602173558-283122-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220602173558-283122-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20220602173558-283122-m02 --network multinode-20220602173558-283122 --ip 192.168.49.3 --volume multinode-20220602173558-283122-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	I0602 17:36:52.211385  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122-m02 --format={{.State.Running}}
	I0602 17:36:52.247826  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122-m02 --format={{.State.Status}}
	I0602 17:36:52.282505  374798 cli_runner.go:164] Run: docker exec multinode-20220602173558-283122-m02 stat /var/lib/dpkg/alternatives/iptables
	I0602 17:36:52.345188  374798 oci.go:247] the created container "multinode-20220602173558-283122-m02" has a running status.
	I0602 17:36:52.345242  374798 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa...
	I0602 17:36:52.597364  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0602 17:36:52.597413  374798 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0602 17:36:52.690257  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122-m02 --format={{.State.Status}}
	I0602 17:36:52.725456  374798 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0602 17:36:52.725482  374798 kic_runner.go:114] Args: [docker exec --privileged multinode-20220602173558-283122-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
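The key provisioning above generates an RSA keypair on the host, copies the public half into the container's /home/docker/.ssh/authorized_keys, then fixes ownership. A sketch of the keygen and the authorized_keys formatting (2048-bit key size chosen for the example):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Generate an RSA keypair like the kic ssh key above.
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // PEM-encode the private half (this is what id_rsa holds).
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(priv),
        })
        // Format the public half as an authorized_keys line (id_rsa.pub).
        pub, err := ssh.NewPublicKey(&priv.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("private key: %d bytes\n", len(privPEM))
        fmt.Printf("authorized_keys entry: %s", ssh.MarshalAuthorizedKey(pub))
    }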
	I0602 17:36:52.811992  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122-m02 --format={{.State.Status}}
	I0602 17:36:52.846024  374798 machine.go:88] provisioning docker machine ...
	I0602 17:36:52.846070  374798 ubuntu.go:169] provisioning hostname "multinode-20220602173558-283122-m02"
	I0602 17:36:52.846134  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:52.880202  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:52.880409  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49522 <nil> <nil>}
	I0602 17:36:52.880435  374798 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220602173558-283122-m02 && echo "multinode-20220602173558-283122-m02" | sudo tee /etc/hostname
	I0602 17:36:53.006413  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220602173558-283122-m02
	
	I0602 17:36:53.006499  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:53.039516  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:53.039740  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49522 <nil> <nil>}
	I0602 17:36:53.039776  374798 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220602173558-283122-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220602173558-283122-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220602173558-283122-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 17:36:53.153468  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: 
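Provisioning commands like the hostname and /etc/hosts edits above run over SSH to the forwarded port (49522 here). A minimal sketch with golang.org/x/crypto/ssh, assuming the machine key generated earlier (key path is illustrative; host-key checking is skipped since the target is a local container):

    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/.minikube/machines/m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local kic container only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:49522", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        var out bytes.Buffer
        session.Stdout = &out
        if err := session.Run("hostname"); err != nil {
            panic(err)
        }
        fmt.Print(out.String())
    }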
	I0602 17:36:53.153499  374798 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem
ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 17:36:53.153520  374798 ubuntu.go:177] setting up certificates
	I0602 17:36:53.153530  374798 provision.go:83] configureAuth start
	I0602 17:36:53.153578  374798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602173558-283122-m02
	I0602 17:36:53.186533  374798 provision.go:138] copyHostCerts
	I0602 17:36:53.186586  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 17:36:53.186621  374798 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 17:36:53.186635  374798 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 17:36:53.186713  374798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 17:36:53.186791  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 17:36:53.186812  374798 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 17:36:53.186821  374798 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 17:36:53.186848  374798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 17:36:53.186897  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 17:36:53.186921  374798 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 17:36:53.186930  374798 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 17:36:53.186957  374798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1679 bytes)
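The copyHostCerts step above is a plain remove-then-copy: any stale ca.pem, cert.pem, or key.pem under .minikube is deleted, then the file is copied back from .minikube/certs with the byte counts logged. A minimal Go sketch of that pattern, with hypothetical paths and not minikube's actual helper:

    // copycert_sketch.go: remove-then-copy, as in the copyHostCerts log lines.
    package main

    import (
    	"io"
    	"log"
    	"os"
    )

    // copyHostCert deletes a stale destination if present ("rm: ..." step),
    // then copies src to dst with 0644 permissions ("cp: ... --> ..." step).
    func copyHostCert(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		if err := os.Remove(dst); err != nil {
    			return err
    		}
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o644)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	if err := copyHostCert("certs/ca.pem", "ca.pem"); err != nil {
    		log.Fatal(err)
    	}
    }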
	I0602 17:36:53.187006  374798 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.multinode-20220602173558-283122-m02 san=[192.168.49.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220602173558-283122-m02]
	I0602 17:36:53.299400  374798 provision.go:172] copyRemoteCerts
	I0602 17:36:53.299469  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 17:36:53.299508  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:53.331449  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49522 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa Username:docker}
	I0602 17:36:53.420721  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0602 17:36:53.420803  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 17:36:53.439328  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0602 17:36:53.439402  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0602 17:36:53.458058  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0602 17:36:53.458122  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 17:36:53.475939  374798 provision.go:86] duration metric: configureAuth took 322.394179ms
	I0602 17:36:53.475975  374798 ubuntu.go:193] setting minikube options for container-runtime
	I0602 17:36:53.476199  374798 config.go:178] Loaded profile config "multinode-20220602173558-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:36:53.476263  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:53.507994  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:53.508159  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49522 <nil> <nil>}
	I0602 17:36:53.508177  374798 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 17:36:53.621424  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 17:36:53.621455  374798 ubuntu.go:71] root file system type: overlay
	I0602 17:36:53.621617  374798 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 17:36:53.621674  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:53.654568  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:53.654763  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49522 <nil> <nil>}
	I0602 17:36:53.654863  374798 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 17:36:53.778683  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 17:36:53.778777  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:53.812021  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:53.812202  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49522 <nil> <nil>}
	I0602 17:36:53.812231  374798 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 17:36:54.467417  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:36:53.775987983 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0602 17:36:54.467453  374798 machine.go:91] provisioned docker machine in 1.621404077s
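The docker.service update above is idempotent: minikube writes docker.service.new, diffs it against the installed unit, and only when they differ moves the new file into place and reloads/restarts docker. A minimal local Go sketch of the same shell sequence (assumes root and the logged paths; the real run executes it over SSH):

    // unitupdate_sketch.go: install a unit file only if it changed.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// diff exits non-zero when the files differ; only then install the
    	// new unit and restart docker, mirroring the logged one-liner.
    	script := `diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
    		`{ mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
    		`systemctl -f daemon-reload && systemctl -f enable docker && systemctl -f restart docker; }`
    	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
    	log.Printf("%s", out)
    	if err != nil {
    		log.Fatal(err)
    	}
    }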
	I0602 17:36:54.467464  374798 client.go:171] LocalClient.Create took 10.122750699s
	I0602 17:36:54.467483  374798 start.go:173] duration metric: libmachine.API.Create for "multinode-20220602173558-283122" took 10.122803983s
	I0602 17:36:54.467492  374798 start.go:306] post-start starting for "multinode-20220602173558-283122-m02" (driver="docker")
	I0602 17:36:54.467500  374798 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 17:36:54.467568  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 17:36:54.467619  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:54.498860  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49522 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa Username:docker}
	I0602 17:36:54.584793  374798 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 17:36:54.587757  374798 command_runner.go:130] > NAME="Ubuntu"
	I0602 17:36:54.587790  374798 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0602 17:36:54.587798  374798 command_runner.go:130] > ID=ubuntu
	I0602 17:36:54.587806  374798 command_runner.go:130] > ID_LIKE=debian
	I0602 17:36:54.587819  374798 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0602 17:36:54.587827  374798 command_runner.go:130] > VERSION_ID="20.04"
	I0602 17:36:54.587841  374798 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0602 17:36:54.587850  374798 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0602 17:36:54.587858  374798 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0602 17:36:54.587871  374798 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0602 17:36:54.587878  374798 command_runner.go:130] > VERSION_CODENAME=focal
	I0602 17:36:54.587883  374798 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0602 17:36:54.587952  374798 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 17:36:54.587971  374798 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 17:36:54.587981  374798 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 17:36:54.587987  374798 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 17:36:54.588002  374798 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 17:36:54.588064  374798 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 17:36:54.588135  374798 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem -> 2831222.pem in /etc/ssl/certs
	I0602 17:36:54.588148  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem -> /etc/ssl/certs/2831222.pem
	I0602 17:36:54.588240  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 17:36:54.595360  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem --> /etc/ssl/certs/2831222.pem (1708 bytes)
	I0602 17:36:54.612951  374798 start.go:309] post-start completed in 145.441924ms
	I0602 17:36:54.613330  374798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602173558-283122-m02
	I0602 17:36:54.644946  374798 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/config.json ...
	I0602 17:36:54.645211  374798 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 17:36:54.645257  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:54.676454  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49522 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa Username:docker}
	I0602 17:36:54.757644  374798 command_runner.go:130] > 22%
	I0602 17:36:54.757725  374798 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 17:36:54.761748  374798 command_runner.go:130] > 227G
	I0602 17:36:54.761788  374798 start.go:134] duration metric: createHost completed in 10.419665986s
	I0602 17:36:54.761797  374798 start.go:81] releasing machines lock for "multinode-20220602173558-283122-m02", held for 10.419773753s
	I0602 17:36:54.761891  374798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602173558-283122-m02
	I0602 17:36:54.797959  374798 out.go:177] * Found network options:
	I0602 17:36:54.799774  374798 out.go:177]   - NO_PROXY=192.168.49.2
	W0602 17:36:54.801498  374798 proxy.go:118] fail to check proxy env: Error ip not in block
	W0602 17:36:54.801553  374798 proxy.go:118] fail to check proxy env: Error ip not in block
	I0602 17:36:54.801647  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 17:36:54.801701  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:54.801749  374798 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 17:36:54.801820  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:54.835138  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49522 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa Username:docker}
	I0602 17:36:54.835553  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49522 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa Username:docker}
	I0602 17:36:54.923443  374798 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 17:36:54.938156  374798 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0602 17:36:54.938185  374798 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0602 17:36:54.938191  374798 command_runner.go:130] > <H1>302 Moved</H1>
	I0602 17:36:54.938195  374798 command_runner.go:130] > The document has moved
	I0602 17:36:54.938203  374798 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0602 17:36:54.938206  374798 command_runner.go:130] > </BODY></HTML>
	I0602 17:36:54.939795  374798 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0602 17:36:54.939813  374798 command_runner.go:130] > [Unit]
	I0602 17:36:54.939821  374798 command_runner.go:130] > Description=Docker Application Container Engine
	I0602 17:36:54.939826  374798 command_runner.go:130] > Documentation=https://docs.docker.com
	I0602 17:36:54.939831  374798 command_runner.go:130] > BindsTo=containerd.service
	I0602 17:36:54.939836  374798 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0602 17:36:54.939840  374798 command_runner.go:130] > Wants=network-online.target
	I0602 17:36:54.939846  374798 command_runner.go:130] > Requires=docker.socket
	I0602 17:36:54.939850  374798 command_runner.go:130] > StartLimitBurst=3
	I0602 17:36:54.939854  374798 command_runner.go:130] > StartLimitIntervalSec=60
	I0602 17:36:54.939858  374798 command_runner.go:130] > [Service]
	I0602 17:36:54.939861  374798 command_runner.go:130] > Type=notify
	I0602 17:36:54.939865  374798 command_runner.go:130] > Restart=on-failure
	I0602 17:36:54.939873  374798 command_runner.go:130] > Environment=NO_PROXY=192.168.49.2
	I0602 17:36:54.939880  374798 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0602 17:36:54.939888  374798 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0602 17:36:54.939900  374798 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0602 17:36:54.939906  374798 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0602 17:36:54.939912  374798 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0602 17:36:54.939922  374798 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0602 17:36:54.939932  374798 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0602 17:36:54.939943  374798 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0602 17:36:54.939952  374798 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0602 17:36:54.939958  374798 command_runner.go:130] > ExecStart=
	I0602 17:36:54.939974  374798 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0602 17:36:54.939984  374798 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0602 17:36:54.939994  374798 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0602 17:36:54.940003  374798 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0602 17:36:54.940011  374798 command_runner.go:130] > LimitNOFILE=infinity
	I0602 17:36:54.940015  374798 command_runner.go:130] > LimitNPROC=infinity
	I0602 17:36:54.940020  374798 command_runner.go:130] > LimitCORE=infinity
	I0602 17:36:54.940025  374798 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0602 17:36:54.940034  374798 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0602 17:36:54.940043  374798 command_runner.go:130] > TasksMax=infinity
	I0602 17:36:54.940047  374798 command_runner.go:130] > TimeoutStartSec=0
	I0602 17:36:54.940053  374798 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0602 17:36:54.940060  374798 command_runner.go:130] > Delegate=yes
	I0602 17:36:54.940065  374798 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0602 17:36:54.940072  374798 command_runner.go:130] > KillMode=process
	I0602 17:36:54.940078  374798 command_runner.go:130] > [Install]
	I0602 17:36:54.940086  374798 command_runner.go:130] > WantedBy=multi-user.target
	I0602 17:36:54.940109  374798 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 17:36:54.940154  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 17:36:54.950787  374798 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 17:36:54.962999  374798 command_runner.go:130] > runtime-endpoint: unix:///var/run/dockershim.sock
	I0602 17:36:54.963025  374798 command_runner.go:130] > image-endpoint: unix:///var/run/dockershim.sock
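The crictl.yaml written above is just two lines pointing crictl at the dockershim socket. A Go sketch of the equivalent local write (the log streams it through printf | sudo tee; writing the file directly is an assumption):

    // crictl_sketch.go: write the two-line crictl config.
    package main

    import (
    	"log"
    	"os"
    )

    func main() {
    	const cfg = "runtime-endpoint: unix:///var/run/dockershim.sock\n" +
    		"image-endpoint: unix:///var/run/dockershim.sock\n"
    	if err := os.MkdirAll("/etc", 0o755); err != nil { // "mkdir -p /etc"
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/crictl.yaml", []byte(cfg), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }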
	I0602 17:36:54.963806  374798 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 17:36:55.043848  374798 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 17:36:55.124923  374798 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 17:36:55.134535  374798 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0602 17:36:55.134568  374798 command_runner.go:130] > [Unit]
	I0602 17:36:55.134576  374798 command_runner.go:130] > Description=Docker Application Container Engine
	I0602 17:36:55.134582  374798 command_runner.go:130] > Documentation=https://docs.docker.com
	I0602 17:36:55.134586  374798 command_runner.go:130] > BindsTo=containerd.service
	I0602 17:36:55.134591  374798 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0602 17:36:55.134596  374798 command_runner.go:130] > Wants=network-online.target
	I0602 17:36:55.134601  374798 command_runner.go:130] > Requires=docker.socket
	I0602 17:36:55.134606  374798 command_runner.go:130] > StartLimitBurst=3
	I0602 17:36:55.134610  374798 command_runner.go:130] > StartLimitIntervalSec=60
	I0602 17:36:55.134614  374798 command_runner.go:130] > [Service]
	I0602 17:36:55.134617  374798 command_runner.go:130] > Type=notify
	I0602 17:36:55.134622  374798 command_runner.go:130] > Restart=on-failure
	I0602 17:36:55.134626  374798 command_runner.go:130] > Environment=NO_PROXY=192.168.49.2
	I0602 17:36:55.134638  374798 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0602 17:36:55.134653  374798 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0602 17:36:55.134660  374798 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0602 17:36:55.134671  374798 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0602 17:36:55.134681  374798 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0602 17:36:55.134692  374798 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0602 17:36:55.134704  374798 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0602 17:36:55.134717  374798 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0602 17:36:55.134724  374798 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0602 17:36:55.134732  374798 command_runner.go:130] > ExecStart=
	I0602 17:36:55.134746  374798 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0602 17:36:55.134758  374798 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0602 17:36:55.134765  374798 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0602 17:36:55.134776  374798 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0602 17:36:55.134784  374798 command_runner.go:130] > LimitNOFILE=infinity
	I0602 17:36:55.134789  374798 command_runner.go:130] > LimitNPROC=infinity
	I0602 17:36:55.134801  374798 command_runner.go:130] > LimitCORE=infinity
	I0602 17:36:55.134810  374798 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0602 17:36:55.134816  374798 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0602 17:36:55.134824  374798 command_runner.go:130] > TasksMax=infinity
	I0602 17:36:55.134828  374798 command_runner.go:130] > TimeoutStartSec=0
	I0602 17:36:55.134838  374798 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0602 17:36:55.134846  374798 command_runner.go:130] > Delegate=yes
	I0602 17:36:55.134852  374798 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0602 17:36:55.134863  374798 command_runner.go:130] > KillMode=process
	I0602 17:36:55.134880  374798 command_runner.go:130] > [Install]
	I0602 17:36:55.134889  374798 command_runner.go:130] > WantedBy=multi-user.target
	I0602 17:36:55.134949  374798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 17:36:55.214195  374798 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 17:36:55.223884  374798 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 17:36:55.263439  374798 command_runner.go:130] > 20.10.16
	I0602 17:36:55.263530  374798 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 17:36:55.301060  374798 command_runner.go:130] > 20.10.16
	I0602 17:36:55.306653  374798 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 17:36:55.308446  374798 out.go:177]   - env NO_PROXY=192.168.49.2
	I0602 17:36:55.310059  374798 cli_runner.go:164] Run: docker network inspect multinode-20220602173558-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 17:36:55.342062  374798 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0602 17:36:55.345425  374798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
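The /etc/hosts edit above is idempotent: it strips any existing line for host.minikube.internal, then appends a fresh IP mapping, staging the result in a temp file. A Go sketch of the same pattern (the log copies the temp file back with sudo cp; the rename here is an assumption):

    // hostsentry_sketch.go: ensure a single tab-separated hosts entry.
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) { // grep -v $'\t<name>$'
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name) // echo "<ip>\t<name>"
    	tmp := path + ".tmp"              // stands in for /tmp/h.$$
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    }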
	I0602 17:36:55.355035  374798 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122 for IP: 192.168.49.3
	I0602 17:36:55.355156  374798 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 17:36:55.355211  374798 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 17:36:55.355228  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0602 17:36:55.355252  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0602 17:36:55.355273  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0602 17:36:55.355292  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0602 17:36:55.355356  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem (1338 bytes)
	W0602 17:36:55.355396  374798 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122_empty.pem, impossibly tiny 0 bytes
	I0602 17:36:55.355415  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 17:36:55.355453  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 17:36:55.355491  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 17:36:55.355525  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1679 bytes)
	I0602 17:36:55.355581  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem (1708 bytes)
	I0602 17:36:55.355620  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem -> /usr/share/ca-certificates/283122.pem
	I0602 17:36:55.355638  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem -> /usr/share/ca-certificates/2831222.pem
	I0602 17:36:55.355656  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:55.355993  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 17:36:55.374151  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0602 17:36:55.392059  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 17:36:55.410425  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0602 17:36:55.428111  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem --> /usr/share/ca-certificates/283122.pem (1338 bytes)
	I0602 17:36:55.446381  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem --> /usr/share/ca-certificates/2831222.pem (1708 bytes)
	I0602 17:36:55.464492  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 17:36:55.483111  374798 ssh_runner.go:195] Run: openssl version
	I0602 17:36:55.487989  374798 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0602 17:36:55.488129  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2831222.pem && ln -fs /usr/share/ca-certificates/2831222.pem /etc/ssl/certs/2831222.pem"
	I0602 17:36:55.495955  374798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2831222.pem
	I0602 17:36:55.499293  374798 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  2 17:19 /usr/share/ca-certificates/2831222.pem
	I0602 17:36:55.499326  374798 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:19 /usr/share/ca-certificates/2831222.pem
	I0602 17:36:55.499362  374798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2831222.pem
	I0602 17:36:55.504251  374798 command_runner.go:130] > 3ec20f2e
	I0602 17:36:55.504461  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2831222.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 17:36:55.512147  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 17:36:55.519767  374798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:55.522783  374798 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:55.522908  374798 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:55.522981  374798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:55.527832  374798 command_runner.go:130] > b5213941
	I0602 17:36:55.527927  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 17:36:55.535436  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/283122.pem && ln -fs /usr/share/ca-certificates/283122.pem /etc/ssl/certs/283122.pem"
	I0602 17:36:55.542708  374798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/283122.pem
	I0602 17:36:55.545784  374798 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  2 17:19 /usr/share/ca-certificates/283122.pem
	I0602 17:36:55.545825  374798 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:19 /usr/share/ca-certificates/283122.pem
	I0602 17:36:55.545867  374798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/283122.pem
	I0602 17:36:55.550670  374798 command_runner.go:130] > 51391683
	I0602 17:36:55.550748  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/283122.pem /etc/ssl/certs/51391683.0"
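The three blocks above all follow the OpenSSL CA-directory convention: compute the certificate's subject hash with `openssl x509 -hash -noout`, then symlink the cert as <hash>.0 under /etc/ssl/certs so OpenSSL can look it up by hash. A Go sketch of that flow, using the paths from the log (not minikube's code):

    // certhash_sketch.go: install a cert under its OpenSSL subject hash.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkBySubjectHash(cert, certsDir string) error {
    	// Equivalent of: openssl x509 -hash -noout -in <cert>
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace any stale link
    	return os.Symlink(cert, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/2831222.pem", "/etc/ssl/certs"); err != nil {
    		log.Fatal(err)
    	}
    }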
	I0602 17:36:55.558622  374798 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 17:36:55.639426  374798 command_runner.go:130] > cgroupfs
	I0602 17:36:55.641649  374798 cni.go:95] Creating CNI manager for ""
	I0602 17:36:55.641679  374798 cni.go:156] 2 nodes found, recommending kindnet
	I0602 17:36:55.641701  374798 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 17:36:55.641721  374798 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.3 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220602173558-283122 NodeName:multinode-20220602173558-283122-m02 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 17:36:55.641869  374798 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "multinode-20220602173558-283122-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 17:36:55.641971  374798 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=multinode-20220602173558-283122-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:multinode-20220602173558-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
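The kubeadm config and kubelet unit above are rendered from the kubeadm options logged at kubeadm.go:158. A stripped-down Go sketch of rendering just the InitConfiguration stanza with text/template; the field set here is illustrative, not minikube's real template:

    // kubeadmcfg_sketch.go: render a per-node InitConfiguration.
    package main

    import (
    	"os"
    	"text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: /var/run/dockershim.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
    	params := struct {
    		AdvertiseAddress, NodeName, NodeIP string
    		APIServerPort                      int
    	}{"192.168.49.3", "multinode-20220602173558-283122-m02", "192.168.49.3", 8443}
    	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, params)
    }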
	I0602 17:36:55.642030  374798 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 17:36:55.649308  374798 command_runner.go:130] > kubeadm
	I0602 17:36:55.649334  374798 command_runner.go:130] > kubectl
	I0602 17:36:55.649338  374798 command_runner.go:130] > kubelet
	I0602 17:36:55.649360  374798 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 17:36:55.649404  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0602 17:36:55.656316  374798 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (413 bytes)
	I0602 17:36:55.669783  374798 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 17:36:55.682702  374798 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0602 17:36:55.688257  374798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 17:36:55.698947  374798 host.go:66] Checking if "multinode-20220602173558-283122" exists ...
	I0602 17:36:55.699233  374798 config.go:178] Loaded profile config "multinode-20220602173558-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:36:55.699222  374798 start.go:282] JoinCluster: &{Name:multinode-20220602173558-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220602173558-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:36:55.699317  374798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0602 17:36:55.699362  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:55.732570  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:36:55.859293  374798 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 684lf2.rh4yuqaps0jw4imj --discovery-token-ca-cert-hash sha256:63ba1911ecf093d3a1264e2d920adc95fcef1f12d9f3ed8ad760b71f9de41674 
	I0602 17:36:55.863005  374798 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0602 17:36:55.863063  374798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 684lf2.rh4yuqaps0jw4imj --discovery-token-ca-cert-hash sha256:63ba1911ecf093d3a1264e2d920adc95fcef1f12d9f3ed8ad760b71f9de41674 --ignore-preflight-errors=all --cri-socket /var/run/dockershim.sock --node-name=multinode-20220602173558-283122-m02"
	I0602 17:36:55.895653  374798 command_runner.go:130] > [preflight] Running pre-flight checks
	I0602 17:36:56.073363  374798 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0602 17:36:56.073388  374798 command_runner.go:130] > KERNEL_VERSION: 5.13.0-1027-gcp
	I0602 17:36:56.073395  374798 command_runner.go:130] > DOCKER_VERSION: 20.10.16
	I0602 17:36:56.073402  374798 command_runner.go:130] > DOCKER_GRAPH_DRIVER: overlay2
	I0602 17:36:56.073410  374798 command_runner.go:130] > OS: Linux
	I0602 17:36:56.073418  374798 command_runner.go:130] > CGROUPS_CPU: enabled
	I0602 17:36:56.073426  374798 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0602 17:36:56.073479  374798 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0602 17:36:56.073521  374798 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0602 17:36:56.073535  374798 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0602 17:36:56.073540  374798 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0602 17:36:56.073547  374798 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0602 17:36:56.073552  374798 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0602 17:36:56.167564  374798 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0602 17:36:56.167605  374798 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0602 17:36:56.527278  374798 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0602 17:36:56.527318  374798 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0602 17:36:56.527331  374798 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0602 17:36:56.610859  374798 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0602 17:37:02.152383  374798 command_runner.go:130] > This node has joined the cluster:
	I0602 17:37:02.152418  374798 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0602 17:37:02.152427  374798 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0602 17:37:02.152437  374798 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0602 17:37:02.155436  374798 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1027-gcp\n", err: exit status 1
	I0602 17:37:02.155473  374798 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0602 17:37:02.155498  374798 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 684lf2.rh4yuqaps0jw4imj --discovery-token-ca-cert-hash sha256:63ba1911ecf093d3a1264e2d920adc95fcef1f12d9f3ed8ad760b71f9de41674 --ignore-preflight-errors=all --cri-socket /var/run/dockershim.sock --node-name=multinode-20220602173558-283122-m02": (6.29241467s)
	I0602 17:37:02.155522  374798 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0602 17:37:02.536064  374798 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0602 17:37:02.536113  374798 start.go:284] JoinCluster complete in 6.836888566s
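The join above is a three-step flow: mint a join command on the control plane with a non-expiring token (kubeadm token create --print-join-command --ttl=0), run it on the worker with preflight errors ignored, then enable and start kubelet. A hypothetical local Go sketch of the sequence (the real run executes each command over SSH and also passes --node-name):

    // joinnode_sketch.go: token create -> kubeadm join -> enable kubelet.
    package main

    import (
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.23.6/kubeadm"
    	// Control plane: prints "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash ...".
    	out, err := exec.Command(kubeadm, "token", "create", "--print-join-command", "--ttl=0").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	join := strings.Fields(strings.TrimSpace(string(out)))
    	// Worker: run the join, tolerating preflight warnings as the log does.
    	args := append(join[1:], "--ignore-preflight-errors=all", "--cri-socket", "/var/run/dockershim.sock")
    	if err := exec.Command(kubeadm, args...).Run(); err != nil {
    		log.Fatal(err)
    	}
    	// Make kubelet start now and on boot.
    	if err := exec.Command("/bin/bash", "-c",
    		"systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet").Run(); err != nil {
    		log.Fatal(err)
    	}
    }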
	I0602 17:37:02.536128  374798 cni.go:95] Creating CNI manager for ""
	I0602 17:37:02.536135  374798 cni.go:156] 2 nodes found, recommending kindnet
	I0602 17:37:02.536212  374798 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0602 17:37:02.540818  374798 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0602 17:37:02.540847  374798 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0602 17:37:02.540857  374798 command_runner.go:130] > Device: 34h/52d	Inode: 13679887    Links: 1
	I0602 17:37:02.540867  374798 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0602 17:37:02.540874  374798 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0602 17:37:02.540883  374798 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0602 17:37:02.540891  374798 command_runner.go:130] > Change: 2022-06-01 20:34:52.693415195 +0000
	I0602 17:37:02.540897  374798 command_runner.go:130] >  Birth: -
	I0602 17:37:02.540992  374798 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0602 17:37:02.541007  374798 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0602 17:37:02.557296  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0602 17:37:02.718429  374798 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0602 17:37:02.718456  374798 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0602 17:37:02.718465  374798 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0602 17:37:02.718473  374798 command_runner.go:130] > daemonset.apps/kindnet configured
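
The four lines above are kubectl apply's idempotent output: the kindnet RBAC objects and service account already match the manifest and report "unchanged", while the DaemonSet is patched and reports "configured". A minimal sketch of issuing the same apply, assuming it runs locally rather than through the SSH runner shown in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Binary, kubeconfig, and manifest paths as shown in the log above.
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.23.6/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}
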
	I0602 17:37:02.718517  374798 start.go:208] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0602 17:37:02.721952  374798 out.go:177] * Verifying Kubernetes components...
	I0602 17:37:02.723546  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 17:37:02.746634  374798 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:37:02.747051  374798 kapi.go:59] client config for multinode-20220602173558-283122: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode
-20220602173558-283122/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17122e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
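
The dump above is a standard client-go rest.Config built from the test's kubeconfig, carrying the profile's TLS client certificate and key. Note QPS:0 and Burst:0, which leave client-go on its default rate limits; this matters for the throttling messages later in the log. A minimal sketch of building an equivalent clientset, with the kubeconfig path as a placeholder:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path is a placeholder; the log uses the Jenkins workspace kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		fmt.Println("host:", config.Host, "client ready:", clientset != nil)
	}
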
	I0602 17:37:02.747389  374798 node_ready.go:35] waiting up to 6m0s for node "multinode-20220602173558-283122-m02" to be "Ready" ...
	I0602 17:37:02.747463  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:02.747474  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:02.747488  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:02.747498  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:02.750028  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:02.750061  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:02.750072  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:02.750082  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:02.750090  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:02.750098  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:02.750107  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:02 GMT
	I0602 17:37:02.750116  374798 round_trippers.go:580]     Audit-Id: 2a73fdb1-be7d-4062-8e8e-7d3820b8c0f3
	I0602 17:37:02.750242  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:03.251017  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:03.251044  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:03.251053  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:03.251060  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:03.253904  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:03.253937  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:03.253948  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:03.253958  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:03.253968  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:03.253977  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:03 GMT
	I0602 17:37:03.253992  374798 round_trippers.go:580]     Audit-Id: d9aa50f4-0531-43bf-bd96-4ccd3c9bdb92
	I0602 17:37:03.254001  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:03.254146  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:03.750796  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:03.750826  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:03.750836  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:03.750842  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:03.753326  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:03.753358  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:03.753370  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:03.753381  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:03.753387  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:03.753393  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:03.753401  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:03 GMT
	I0602 17:37:03.753409  374798 round_trippers.go:580]     Audit-Id: f1832d90-36b1-449c-9dfd-60c6aea763c3
	I0602 17:37:03.753638  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:04.251020  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:04.251043  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:04.251053  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:04.251059  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:04.254333  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:37:04.254368  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:04.254379  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:04.254388  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:04.254397  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:04.254408  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:04.254426  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:04 GMT
	I0602 17:37:04.254435  374798 round_trippers.go:580]     Audit-Id: 6477f7ac-5536-48a8-b3f5-ab3d812c1a52
	I0602 17:37:04.254543  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:04.751066  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:04.751094  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:04.751103  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:04.751110  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:04.753613  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:04.753644  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:04.753656  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:04.753663  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:04.753672  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:04.753679  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:04.753700  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:04 GMT
	I0602 17:37:04.753709  374798 round_trippers.go:580]     Audit-Id: 961a8781-98bc-4fef-bfb5-1f1ee80d0ffe
	I0602 17:37:04.753837  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:04.754178  374798 node_ready.go:58] node "multinode-20220602173558-283122-m02" has status "Ready":"False"
	I0602 17:37:05.251341  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:05.251369  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:05.251378  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:05.251387  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:05.254147  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:05.254177  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:05.254189  374798 round_trippers.go:580]     Audit-Id: 3067cd18-b128-4514-87b5-1ce91eaf4393
	I0602 17:37:05.254198  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:05.254203  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:05.254213  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:05.254218  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:05.254227  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:05 GMT
	I0602 17:37:05.254327  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:05.750886  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:05.750910  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:05.750919  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:05.750925  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:05.753653  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:05.753681  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:05.753697  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:05.753705  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:05.753714  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:05.753721  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:05.753730  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:05 GMT
	I0602 17:37:05.753743  374798 round_trippers.go:580]     Audit-Id: f6f8f29f-dcad-4492-9ddb-7d7e6b994403
	I0602 17:37:05.753868  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:06.251537  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:06.251567  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:06.251576  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:06.251582  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:06.254265  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:06.254296  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:06.254308  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:06.254317  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:06.254323  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:06 GMT
	I0602 17:37:06.254329  374798 round_trippers.go:580]     Audit-Id: b187ece0-eb04-45a6-8a27-dc7754e62d89
	I0602 17:37:06.254338  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:06.254345  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:06.254471  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:06.751046  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:06.751072  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:06.751081  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:06.751087  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:06.753678  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:06.753706  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:06.753721  374798 round_trippers.go:580]     Audit-Id: 1f77e5a9-98a1-4166-84ec-f77de3ae3588
	I0602 17:37:06.753730  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:06.753739  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:06.753752  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:06.753764  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:06.753773  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:06 GMT
	I0602 17:37:06.753883  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:07.251580  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:07.251608  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:07.251617  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:07.251623  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:07.253946  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:07.253969  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:07.253977  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:07.253982  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:07.253988  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:07 GMT
	I0602 17:37:07.253994  374798 round_trippers.go:580]     Audit-Id: c89f70f1-5199-4b61-a207-af61788eed44
	I0602 17:37:07.253999  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:07.254006  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:07.254123  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:07.254415  374798 node_ready.go:58] node "multinode-20220602173558-283122-m02" has status "Ready":"False"
	I0602 17:37:07.751785  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:07.751811  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:07.751821  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:07.751834  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:07.754282  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:07.754306  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:07.754314  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:07 GMT
	I0602 17:37:07.754319  374798 round_trippers.go:580]     Audit-Id: 91687ef8-4653-4d35-86ab-e2e798308e98
	I0602 17:37:07.754325  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:07.754330  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:07.754335  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:07.754340  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:07.754452  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:08.251558  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:08.251589  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.251598  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.251604  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.254063  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:08.254096  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.254107  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.254115  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.254124  374798 round_trippers.go:580]     Audit-Id: 771634dd-db16-45c7-b519-2ec9ff11035c
	I0602 17:37:08.254133  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.254149  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.254158  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.254268  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"576","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4239 chars]
	I0602 17:37:08.254566  374798 node_ready.go:49] node "multinode-20220602173558-283122-m02" has status "Ready":"True"
	I0602 17:37:08.254587  374798 node_ready.go:38] duration metric: took 5.507178903s waiting for node "multinode-20220602173558-283122-m02" to be "Ready" ...
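
The GETs above, spaced roughly 500ms apart, are a poll loop on the node's Ready condition, which flips to True once the kubelet reports its runtime (including the kindnet-provided CNI) ready, about 5.5s here. A sketch of that pattern under the same interval and timeout (not minikube's exact code):

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the Ready condition: one GET every 500ms, up to 6m.
	func waitNodeReady(cs kubernetes.Interface, name string) error {
		return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as "not yet"; a real impl would distinguish
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}
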
	I0602 17:37:08.254596  374798 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 17:37:08.254651  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0602 17:37:08.254659  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.254667  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.254673  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.257871  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:37:08.257894  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.257902  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.257908  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.257914  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.257922  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.257931  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.257953  374798 round_trippers.go:580]     Audit-Id: 5252500e-bac6-4306-8e54-9be10b2b09fb
	I0602 17:37:08.258612  374798 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"576"},"items":[{"metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"504","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:a
rgs":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 69085 chars]
	I0602 17:37:08.260759  374798 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-l5jxv" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.260831  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-l5jxv
	I0602 17:37:08.260851  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.260859  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.260868  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.262676  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.262710  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.262719  374798 round_trippers.go:580]     Audit-Id: 00ebb1ff-f41a-4ada-a895-89d1c8e57e9f
	I0602 17:37:08.262733  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.262746  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.262758  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.262767  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.262777  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.262889  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"504","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5979 chars]
	I0602 17:37:08.263334  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:08.263348  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.263356  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.263366  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.265272  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.265300  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.265310  374798 round_trippers.go:580]     Audit-Id: 9e4e36f3-f3e0-4fef-b8d7-eb6655537896
	I0602 17:37:08.265319  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.265326  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.265338  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.265351  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.265361  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.265459  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"516","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5073 chars]
	I0602 17:37:08.265739  374798 pod_ready.go:92] pod "coredns-64897985d-l5jxv" in "kube-system" namespace has status "Ready":"True"
	I0602 17:37:08.265751  374798 pod_ready.go:81] duration metric: took 4.967827ms waiting for pod "coredns-64897985d-l5jxv" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.265759  374798 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.265807  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20220602173558-283122
	I0602 17:37:08.265815  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.265822  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.265831  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.267593  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.267612  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.267619  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.267625  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.267630  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.267635  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.267640  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.267645  374798 round_trippers.go:580]     Audit-Id: 62a2831b-ba43-4391-8da8-5c49083402e7
	I0602 17:37:08.267829  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220602173558-283122","namespace":"kube-system","uid":"2de3dc57-6748-4c20-bf65-3b2cbd2f8a0f","resourceVersion":"331","creationTimestamp":"2022-06-02T17:36:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"de8e89a254ff48460c714894f4297613","kubernetes.io/config.mirror":"de8e89a254ff48460c714894f4297613","kubernetes.io/config.seen":"2022-06-02T17:36:20.948851860Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:21Z","fieldsType":"FieldsV1","
fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes. [truncated 5804 chars]
	I0602 17:37:08.268331  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:08.268354  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.268368  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.268382  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.270115  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.270133  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.270140  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.270146  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.270154  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.270162  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.270176  374798 round_trippers.go:580]     Audit-Id: 5b5cd77d-859c-4405-93fc-976fb6dc8159
	I0602 17:37:08.270187  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.270380  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"516","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5073 chars]
	I0602 17:37:08.270770  374798 pod_ready.go:92] pod "etcd-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:37:08.270790  374798 pod_ready.go:81] duration metric: took 5.020258ms waiting for pod "etcd-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.270811  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.270872  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220602173558-283122
	I0602 17:37:08.270885  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.270898  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.270912  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.272630  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.272648  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.272655  374798 round_trippers.go:580]     Audit-Id: 99a652a3-ed30-4ccd-9cab-300b5ff35f5c
	I0602 17:37:08.272661  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.272669  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.272677  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.272691  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.272712  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.272889  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220602173558-283122","namespace":"kube-system","uid":"999b4342-ad8c-46aa-a5a0-bdd14089e393","resourceVersion":"326","creationTimestamp":"2022-06-02T17:36:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"9b88133a9e18b6ec7d53499c3c2debcc","kubernetes.io/config.mirror":"9b88133a9e18b6ec7d53499c3c2debcc","kubernetes.io/config.seen":"2022-06-02T17:36:13.934512571Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:20Z
","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{". [truncated 8313 chars]
	I0602 17:37:08.273451  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:08.273471  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.273484  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.273495  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.275046  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.275065  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.275074  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.275081  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.275090  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.275101  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.275113  374798 round_trippers.go:580]     Audit-Id: d8d2e0ce-89ee-45c6-b490-38b68d32be77
	I0602 17:37:08.275123  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.275213  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"516","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5073 chars]
	I0602 17:37:08.275484  374798 pod_ready.go:92] pod "kube-apiserver-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:37:08.275498  374798 pod_ready.go:81] duration metric: took 4.673098ms waiting for pod "kube-apiserver-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.275509  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.275555  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220602173558-283122
	I0602 17:37:08.275565  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.275576  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.275590  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.277350  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.277372  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.277382  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.277392  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.277405  374798 round_trippers.go:580]     Audit-Id: fb417878-9737-4b4f-9ddb-7d407bef73f0
	I0602 17:37:08.277466  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.277484  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.277493  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.277595  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220602173558-283122","namespace":"kube-system","uid":"dc0ed8b1-4d22-46e1-a708-fee470e6c6fe","resourceVersion":"330","creationTimestamp":"2022-06-02T17:36:21Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"73385622a1da8f9c02cb3f38e98edff7","kubernetes.io/config.mirror":"73385622a1da8f9c02cb3f38e98edff7","kubernetes.io/config.seen":"2022-06-02T17:36:20.948871153Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":
{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror [truncated 7888 chars]
	I0602 17:37:08.278001  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:08.278016  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.278023  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.278029  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.279558  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.279579  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.279590  374798 round_trippers.go:580]     Audit-Id: d3be00fb-a90e-4034-95bf-55c181c6813c
	I0602 17:37:08.279600  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.279610  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.279628  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.279643  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.279672  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.279770  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"516","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5073 chars]
	I0602 17:37:08.280068  374798 pod_ready.go:92] pod "kube-controller-manager-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:37:08.280082  374798 pod_ready.go:81] duration metric: took 4.565089ms waiting for pod "kube-controller-manager-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.280091  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kz8ts" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.452506  374798 request.go:533] Waited for 172.33901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kz8ts
	I0602 17:37:08.452567  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kz8ts
	I0602 17:37:08.452572  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.452581  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.452588  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.455087  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:08.455111  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.455120  374798 round_trippers.go:580]     Audit-Id: 1c750384-0ad7-48b3-9326-a7f03f9e7f7a
	I0602 17:37:08.455128  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.455137  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.455145  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.455154  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.455162  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.455272  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kz8ts","generateName":"kube-proxy-","namespace":"kube-system","uid":"37731fc3-69e5-4170-a1d9-d3878e1acf0a","resourceVersion":"561","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"controller-revision-hash":"549f7469d9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdf29778-5a78-41df-8413-a6f3417a1d56","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdf29778-5a78-41df-8413-a6f3417a1d56\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5552 chars]
	I0602 17:37:08.652128  374798 request.go:533] Waited for 196.382187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:08.652198  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:08.652203  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.652211  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.652221  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.654564  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:08.654590  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.654601  374798 round_trippers.go:580]     Audit-Id: 19ae874e-ecc8-4c3c-8e08-152474f44846
	I0602 17:37:08.654610  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.654619  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.654625  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.654633  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.654647  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.654738  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"576","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4239 chars]
	I0602 17:37:08.655063  374798 pod_ready.go:92] pod "kube-proxy-kz8ts" in "kube-system" namespace has status "Ready":"True"
	I0602 17:37:08.655080  374798 pod_ready.go:81] duration metric: took 374.982729ms waiting for pod "kube-proxy-kz8ts" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.655089  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q8c4p" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.852537  374798 request.go:533] Waited for 197.354151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8c4p
	I0602 17:37:08.852598  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8c4p
	I0602 17:37:08.852603  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.852612  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.852619  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.855212  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:08.855235  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.855243  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.855249  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.855254  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.855259  374798 round_trippers.go:580]     Audit-Id: c632b147-e045-4e43-8f9f-1a9c8f40210c
	I0602 17:37:08.855265  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.855273  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.855424  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8c4p","generateName":"kube-proxy-","namespace":"kube-system","uid":"f1878b35-b1dd-4c80-b1c8-6848ceeac02c","resourceVersion":"475","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"controller-revision-hash":"549f7469d9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdf29778-5a78-41df-8413-a6f3417a1d56","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdf29778-5a78-41df-8413-a6f3417a1d56\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5544 chars]
	I0602 17:37:09.052282  374798 request.go:533] Waited for 196.364931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:09.052343  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:09.052348  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:09.052357  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:09.052367  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:09.054922  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:09.054952  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:09.054964  374798 round_trippers.go:580]     Audit-Id: 8c28b2a8-71af-4e3a-b39d-232a0bbe6016
	I0602 17:37:09.054973  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:09.054981  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:09.054990  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:09.055004  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:09.055017  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:09 GMT
	I0602 17:37:09.055156  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"516","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5073 chars]
	I0602 17:37:09.055495  374798 pod_ready.go:92] pod "kube-proxy-q8c4p" in "kube-system" namespace has status "Ready":"True"
	I0602 17:37:09.055511  374798 pod_ready.go:81] duration metric: took 400.416351ms waiting for pod "kube-proxy-q8c4p" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:09.055520  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:09.251962  374798 request.go:533] Waited for 196.341866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220602173558-283122
	I0602 17:37:09.252024  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220602173558-283122
	I0602 17:37:09.252029  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:09.252038  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:09.252044  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:09.254674  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:09.254716  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:09.254725  374798 round_trippers.go:580]     Audit-Id: 09506928-08c1-435b-96c8-43d8f84b9678
	I0602 17:37:09.254731  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:09.254736  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:09.254742  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:09.254747  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:09.254753  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:09 GMT
	I0602 17:37:09.254861  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220602173558-283122","namespace":"kube-system","uid":"b207e4b1-a64d-4aaf-bd7b-5eaec8e23004","resourceVersion":"366","creationTimestamp":"2022-06-02T17:36:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"16c53ed8f606fa43c821fa27956bef6a","kubernetes.io/config.mirror":"16c53ed8f606fa43c821fa27956bef6a","kubernetes.io/config.seen":"2022-06-02T17:36:20.948872903Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kuberne
tes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes [truncated 4770 chars]
	I0602 17:37:09.452625  374798 request.go:533] Waited for 197.345309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:09.452761  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:09.452769  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:09.452782  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:09.452804  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:09.455086  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:09.455109  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:09.455120  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:09.455126  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:09 GMT
	I0602 17:37:09.455131  374798 round_trippers.go:580]     Audit-Id: 4a9b01e3-7d08-4f5a-84f5-ee376bad623f
	I0602 17:37:09.455137  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:09.455142  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:09.455149  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:09.455313  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"516","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5073 chars]
	I0602 17:37:09.455810  374798 pod_ready.go:92] pod "kube-scheduler-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:37:09.455829  374798 pod_ready.go:81] duration metric: took 400.302223ms waiting for pod "kube-scheduler-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:09.455847  374798 pod_ready.go:38] duration metric: took 1.201231114s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 17:37:09.455877  374798 system_svc.go:44] waiting for kubelet service to be running ....
	I0602 17:37:09.455925  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 17:37:09.466312  374798 system_svc.go:56] duration metric: took 10.426082ms WaitForService to wait for kubelet.
	I0602 17:37:09.466346  374798 kubeadm.go:572] duration metric: took 6.747792517s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0602 17:37:09.466374  374798 node_conditions.go:102] verifying NodePressure condition ...
	I0602 17:37:09.651734  374798 request.go:533] Waited for 185.265956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0602 17:37:09.651801  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0602 17:37:09.651806  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:09.651814  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:09.651821  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:09.654233  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:09.654265  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:09.654276  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:09 GMT
	I0602 17:37:09.654285  374798 round_trippers.go:580]     Audit-Id: 3a441877-2ef4-4935-89a9-9553ffd6f9b2
	I0602 17:37:09.654294  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:09.654304  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:09.654314  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:09.654320  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:09.654442  374798 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"578"},"items":[{"metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"516","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"
0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma [truncated 10357 chars]
	I0602 17:37:09.654898  374798 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0602 17:37:09.654914  374798 node_conditions.go:123] node cpu capacity is 8
	I0602 17:37:09.654924  374798 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0602 17:37:09.654927  374798 node_conditions.go:123] node cpu capacity is 8
	I0602 17:37:09.654931  374798 node_conditions.go:105] duration metric: took 188.551949ms to run NodePressure ...
	I0602 17:37:09.654949  374798 start.go:213] waiting for startup goroutines ...
	I0602 17:37:09.693503  374798 start.go:504] kubectl: 1.24.1, cluster: 1.23.6 (minor skew: 1)
	I0602 17:37:09.697481  374798 out.go:177] * Done! kubectl is now configured to use "multinode-20220602173558-283122" cluster and "default" namespace by default
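
The pod_ready.go waits above poll each pod through the apiserver until its Ready condition reports True; the "Waited for ... due to client-side throttling" lines come from client-go's default client-side rate limiter (5 QPS with a burst of 10 unless overridden), not from API Priority and Fairness. A minimal client-go sketch of the same readiness check, assuming an illustrative kubeconfig path and reusing a pod name from the log above:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Illustrative kubeconfig path; minikube writes its own context here.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll until the pod reports Ready=True, mirroring pod_ready.go's 6m0s wait.
    	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-kz8ts", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // transient errors: keep polling
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("pod is Ready")
    }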
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 17:36:06 UTC, end at Thu 2022-06-02 17:43:15 UTC. --
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[254]: time="2022-06-02T17:36:08.472332999Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 02 17:36:08 multinode-20220602173558-283122 systemd[1]: docker.service: Succeeded.
	Jun 02 17:36:08 multinode-20220602173558-283122 systemd[1]: Stopped Docker Application Container Engine.
	Jun 02 17:36:08 multinode-20220602173558-283122 systemd[1]: Starting Docker Application Container Engine...
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.516267231Z" level=info msg="Starting up"
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.518288581Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.518316302Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.518339313Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.518349035Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.520225053Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.520253774Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.520273528Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.520287190Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.526492223Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.530989753Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.531014520Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.531019923Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.531169527Z" level=info msg="Loading containers: start."
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.612687282Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.647617431Z" level=info msg="Loading containers: done."
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.658821038Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.658887309Z" level=info msg="Daemon has completed initialization"
	Jun 02 17:36:08 multinode-20220602173558-283122 systemd[1]: Started Docker Application Container Engine.
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.676600703Z" level=info msg="API listen on [::]:2376"
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.679907817Z" level=info msg="API listen on /var/run/docker.sock"
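
The daemon banner above reports commit=f756502 graphdriver(s)=overlay2 version=20.10.16; the same fields are exposed through the Docker Engine API. A minimal sketch with the Docker Go SDK, assuming the daemon is reachable on the default socket (or via DOCKER_HOST):

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/docker/docker/client"
    )

    func main() {
    	// Connect using the environment (DOCKER_HOST etc.) and negotiate the API version.
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		panic(err)
    	}
    	defer cli.Close()

    	// Info carries the same details the daemon logs at startup.
    	info, err := cli.Info(context.Background())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("version=%s graphdriver=%s\n", info.ServerVersion, info.Driver)
    }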
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	d39c455a82b66       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   6 minutes ago       Running             busybox                   0                   cff79d290c693
	e54a6202cdf67       a4ca41631cc7a                                                                                         6 minutes ago       Running             coredns                   0                   51fda5ea13233
	1a82c523cdd37       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       0                   de0a268075989
	9ab48ef578bc9       kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c              6 minutes ago       Running             kindnet-cni               0                   68604fb5b9b16
	0612b1336d0e9       4c03754524064                                                                                         6 minutes ago       Running             kube-proxy                0                   8b348b20026be
	fe7106c1507c2       df7b72818ad2e                                                                                         7 minutes ago       Running             kube-controller-manager   0                   6dd325a1bbbf2
	905dd1d3f937c       25f8c7f3da61c                                                                                         7 minutes ago       Running             etcd                      0                   503ddb8d4e075
	9c2cc422d2c1b       8fa62c12256df                                                                                         7 minutes ago       Running             kube-apiserver            0                   8d21b199ac971
	b537ad213767e       595f327f224a4                                                                                         7 minutes ago       Running             kube-scheduler            0                   e82a0aa83c92e
	
	* 
	* ==> coredns [e54a6202cdf6] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20220602173558-283122
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220602173558-283122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=multinode-20220602173558-283122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T17_36_22_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 17:36:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220602173558-283122
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 17:43:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 17:42:26 +0000   Thu, 02 Jun 2022 17:36:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 17:42:26 +0000   Thu, 02 Jun 2022 17:36:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 17:42:26 +0000   Thu, 02 Jun 2022 17:36:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 17:42:26 +0000   Thu, 02 Jun 2022 17:36:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    multinode-20220602173558-283122
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                53d269d0-8245-41e8-ac6d-0e3bcce49ad2
	  Boot ID:                    eac629ea-39e3-4b75-b891-94bd750a4fe6
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7978565885-2cv69                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 coredns-64897985d-l5jxv                                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m42s
	  kube-system                 etcd-multinode-20220602173558-283122                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m54s
	  kube-system                 kindnet-d4jwl                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m42s
	  kube-system                 kube-apiserver-multinode-20220602173558-283122              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-controller-manager-multinode-20220602173558-283122    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m54s
	  kube-system                 kube-proxy-q8c4p                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-scheduler-multinode-20220602173558-283122              100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m54s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 6m41s                kube-proxy  
	  Normal  Starting                 7m2s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m1s (x5 over 7m1s)  kubelet     Node multinode-20220602173558-283122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m1s (x5 over 7m1s)  kubelet     Node multinode-20220602173558-283122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m1s (x4 over 7m1s)  kubelet     Node multinode-20220602173558-283122 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m1s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 6m55s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m54s                kubelet     Node multinode-20220602173558-283122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m54s                kubelet     Node multinode-20220602173558-283122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m54s                kubelet     Node multinode-20220602173558-283122 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m54s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m34s                kubelet     Node multinode-20220602173558-283122 status is now: NodeReady
	
	
	Name:               multinode-20220602173558-283122-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220602173558-283122-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 17:36:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220602173558-283122-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 17:43:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 17:42:34 +0000   Thu, 02 Jun 2022 17:36:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 17:42:34 +0000   Thu, 02 Jun 2022 17:36:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 17:42:34 +0000   Thu, 02 Jun 2022 17:36:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 17:42:34 +0000   Thu, 02 Jun 2022 17:37:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    multinode-20220602173558-283122-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                33a7e218-bf68-456c-9c38-8eaf0595363e
	  Boot ID:                    eac629ea-39e3-4b75-b891-94bd750a4fe6
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7978565885-tq8p2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kindnet-dkv9b               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m18s
	  kube-system                 kube-proxy-kz8ts            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m16s                  kube-proxy  
	  Normal  Starting                 6m18s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m18s (x2 over 6m18s)  kubelet     Node multinode-20220602173558-283122-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s (x2 over 6m18s)  kubelet     Node multinode-20220602173558-283122-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m18s (x2 over 6m18s)  kubelet     Node multinode-20220602173558-283122-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m18s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m8s                   kubelet     Node multinode-20220602173558-283122-m02 status is now: NodeReady
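
The Capacity/Allocatable blocks above carry the figures minikube's node_conditions.go verified earlier (304695084Ki of ephemeral storage and 8 CPUs per node), and the condition table is what the NodePressure check reads. A minimal client-go sketch that prints the same fields for every node (kubeconfig path illustrative):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    		for _, c := range n.Status.Conditions {
    			// MemoryPressure, DiskPressure, and PIDPressure should all be False on a healthy node.
    			if c.Type != corev1.NodeReady && c.Status != corev1.ConditionFalse {
    				fmt.Printf("  %s=%s (%s)\n", c.Type, c.Status, c.Reason)
    			}
    		}
    	}
    }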
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.312798] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000006] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.002993] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000007] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.002282] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000006] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.004382] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000006] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.003939] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000006] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.003467] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000007] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.003574] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000007] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.004761] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000007] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.002246] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000006] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.004730] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000006] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[Jun 2 17:43] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000006] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.004058] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000006] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
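
In these kernel messages the second address is the offending source: packets from 10.244.1.2, a pod address inside the second node's 10.244.1.0/24 CIDR, arrived on the Docker bridge br-e61542ad8e34, where that source is treated as unroutable; this is most likely the cross-node pod traffic between the two busybox replicas. Whether such packets are logged at all is controlled by the log_martians sysctl; a minimal Go sketch that reads it from the standard procfs location:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// 1 = log martian packets, 0 = do not log them (routing behavior is unchanged).
    	b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/log_martians")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("net.ipv4.conf.all.log_martians = %s\n", strings.TrimSpace(string(b)))
    }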
	
	* 
	* ==> etcd [905dd1d3f937] <==
	* {"level":"info","ts":"2022-06-02T17:36:15.248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-06-02T17:36:15.249Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-06-02T17:36:15.254Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-02T17:36:15.255Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:36:15.255Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:36:15.255Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-02T17:36:15.255Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:36:15.642Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:36:15.642Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:36:15.642Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:36:15.643Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:multinode-20220602173558-283122 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T17:36:15.643Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:36:15.643Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:36:15.643Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T17:36:15.643Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T17:36:15.644Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T17:36:15.644Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
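
The etcd log above shows a single-member cluster electing itself leader at term 2 and serving clients on 192.168.49.2:2379. The member's version, raft term, and leader ID can be queried with etcd's clientv3; a minimal sketch that reuses the cert paths from the log for illustration (a real client would normally present a dedicated client cert/key signed by the etcd CA):

    package main

    import (
    	"context"
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"os"
    	"time"

    	clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
    	// TLS material, paths as logged by etcd above (illustrative choice of cert).
    	cert, err := tls.LoadX509KeyPair(
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/etcd/server.key")
    	if err != nil {
    		panic(err)
    	}
    	ca, err := os.ReadFile("/var/lib/minikube/certs/etcd/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(ca)

    	cli, err := clientv3.New(clientv3.Config{
    		Endpoints:   []string{"https://192.168.49.2:2379"},
    		DialTimeout: 5 * time.Second,
    		TLS:         &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer cli.Close()

    	// Status reports the member's version, raft term, and current leader ID.
    	st, err := cli.Status(context.Background(), "https://192.168.49.2:2379")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("version=%s raftTerm=%d leader=%x\n", st.Version, st.RaftTerm, st.Leader)
    }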
	
	* 
	* ==> kernel <==
	*  17:43:16 up  2:25,  0 users,  load average: 0.36, 0.61, 0.76
	Linux multinode-20220602173558-283122 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [9c2cc422d2c1] <==
	* I0602 17:36:17.934019       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0602 17:36:17.934078       1 cache.go:39] Caches are synced for autoregister controller
	I0602 17:36:17.934115       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0602 17:36:17.934094       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0602 17:36:17.934420       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0602 17:36:17.934519       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0602 17:36:18.792442       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0602 17:36:18.792476       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0602 17:36:18.798623       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0602 17:36:18.801835       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0602 17:36:18.801857       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0602 17:36:19.185881       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 17:36:19.214578       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0602 17:36:19.361853       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0602 17:36:19.366878       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0602 17:36:19.367911       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 17:36:19.371843       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0602 17:36:19.917469       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 17:36:20.782557       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 17:36:20.789092       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0602 17:36:20.801697       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0602 17:36:33.523296       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0602 17:36:33.623777       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0602 17:36:34.682189       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [fe7106c1507c] <==
	* I0602 17:36:32.915128       1 shared_informer.go:247] Caches are synced for job 
	I0602 17:36:32.921098       1 shared_informer.go:247] Caches are synced for cronjob 
	I0602 17:36:32.973722       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:36:32.975515       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:36:33.398678       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:36:33.469355       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:36:33.469385       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0602 17:36:33.525483       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0602 17:36:33.539367       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0602 17:36:33.629403       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q8c4p"
	I0602 17:36:33.631456       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-d4jwl"
	I0602 17:36:33.775146       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-dqpcc"
	I0602 17:36:33.782710       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-l5jxv"
	I0602 17:36:33.803528       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-dqpcc"
	I0602 17:36:42.721991       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0602 17:36:57.589057       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220602173558-283122-m02" does not exist
	I0602 17:36:57.594847       1 range_allocator.go:374] Set node multinode-20220602173558-283122-m02 PodCIDR to [10.244.1.0/24]
	I0602 17:36:57.598752       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kz8ts"
	I0602 17:36:57.598779       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-dkv9b"
	W0602 17:36:57.724114       1 node_lifecycle_controller.go:1012] Missing timestamp for Node multinode-20220602173558-283122-m02. Assuming now as a timestamp.
	I0602 17:36:57.724149       1 event.go:294] "Event occurred" object="multinode-20220602173558-283122-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20220602173558-283122-m02 event: Registered Node multinode-20220602173558-283122-m02 in Controller"
	I0602 17:37:10.535885       1 event.go:294] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7978565885 to 2"
	I0602 17:37:10.541255       1 event.go:294] "Event occurred" object="default/busybox-7978565885" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7978565885-tq8p2"
	I0602 17:37:10.544352       1 event.go:294] "Event occurred" object="default/busybox-7978565885" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7978565885-2cv69"
	I0602 17:37:12.733343       1 event.go:294] "Event occurred" object="default/busybox-7978565885-tq8p2" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7978565885-tq8p2"
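
The "Event occurred" lines above are ordinary v1 Events, so the busybox scale-up can be pulled back out of the API with a field selector. A minimal client-go sketch (kubeconfig path illustrative):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Events recorded against the busybox Deployment in the default namespace.
    	events, err := client.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{
    		FieldSelector: "involvedObject.name=busybox,involvedObject.kind=Deployment",
    	})
    	if err != nil {
    		panic(err)
    	}
    	for _, e := range events.Items {
    		fmt.Printf("%s %s %s: %s\n", e.LastTimestamp, e.Type, e.Reason, e.Message)
    	}
    }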
	
	* 
	* ==> kube-proxy [0612b1336d0e] <==
	* I0602 17:36:34.655981       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0602 17:36:34.656062       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0602 17:36:34.656099       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 17:36:34.678720       1 server_others.go:206] "Using iptables Proxier"
	I0602 17:36:34.678758       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 17:36:34.678766       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 17:36:34.678784       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 17:36:34.679241       1 server.go:656] "Version info" version="v1.23.6"
	I0602 17:36:34.679850       1 config.go:317] "Starting service config controller"
	I0602 17:36:34.679850       1 config.go:226] "Starting endpoint slice config controller"
	I0602 17:36:34.679890       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 17:36:34.679890       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 17:36:34.780156       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0602 17:36:34.780174       1 shared_informer.go:247] Caches are synced for service config 
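
proxyMode="" above means no mode was set in the kube-proxy configuration, so the iptables proxier was selected by default. On a kubeadm-provisioned cluster such as this one, the effective KubeProxyConfiguration lives in the kube-system/kube-proxy ConfigMap; a minimal sketch that prints it (the ConfigMap name and its config.conf key follow the kubeadm convention):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm stores the KubeProxyConfiguration under the "config.conf" key.
    	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(cm.Data["config.conf"])
    }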
	
	* 
	* ==> kube-scheduler [b537ad213767] <==
	* W0602 17:36:17.854407       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0602 17:36:17.854478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 17:36:17.854780       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0602 17:36:17.854992       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 17:36:17.855049       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 17:36:17.854797       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0602 17:36:17.855090       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0602 17:36:17.854808       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0602 17:36:17.855110       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0602 17:36:17.854913       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0602 17:36:17.855136       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0602 17:36:17.854926       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 17:36:17.855150       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 17:36:17.854971       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 17:36:17.855170       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 17:36:17.855029       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0602 17:36:18.771151       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 17:36:18.771199       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 17:36:18.792668       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0602 17:36:18.792708       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0602 17:36:18.979071       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 17:36:18.979110       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 17:36:19.038351       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 17:36:19.038394       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0602 17:36:21.451696       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 17:36:06 UTC, end at Thu 2022-06-02 17:43:16 UTC. --
	Jun 02 17:36:32 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:32.840622    1929 docker_service.go:364] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Jun 02 17:36:32 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:32.840812    1929 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 02 17:36:32 multinode-20220602173558-283122 kubelet[1929]: E0602 17:36:32.848474    1929 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.633961    1929 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.637381    1929 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648555    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1878b35-b1dd-4c80-b1c8-6848ceeac02c-xtables-lock\") pod \"kube-proxy-q8c4p\" (UID: \"f1878b35-b1dd-4c80-b1c8-6848ceeac02c\") " pod="kube-system/kube-proxy-q8c4p"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648599    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1878b35-b1dd-4c80-b1c8-6848ceeac02c-lib-modules\") pod \"kube-proxy-q8c4p\" (UID: \"f1878b35-b1dd-4c80-b1c8-6848ceeac02c\") " pod="kube-system/kube-proxy-q8c4p"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648625    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbmkl\" (UniqueName: \"kubernetes.io/projected/f1878b35-b1dd-4c80-b1c8-6848ceeac02c-kube-api-access-zbmkl\") pod \"kube-proxy-q8c4p\" (UID: \"f1878b35-b1dd-4c80-b1c8-6848ceeac02c\") " pod="kube-system/kube-proxy-q8c4p"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648647    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/02c02672-9134-4bb8-abdb-c15c1f3334ac-cni-cfg\") pod \"kindnet-d4jwl\" (UID: \"02c02672-9134-4bb8-abdb-c15c1f3334ac\") " pod="kube-system/kindnet-d4jwl"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648665    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1878b35-b1dd-4c80-b1c8-6848ceeac02c-kube-proxy\") pod \"kube-proxy-q8c4p\" (UID: \"f1878b35-b1dd-4c80-b1c8-6848ceeac02c\") " pod="kube-system/kube-proxy-q8c4p"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648710    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf7kl\" (UniqueName: \"kubernetes.io/projected/02c02672-9134-4bb8-abdb-c15c1f3334ac-kube-api-access-rf7kl\") pod \"kindnet-d4jwl\" (UID: \"02c02672-9134-4bb8-abdb-c15c1f3334ac\") " pod="kube-system/kindnet-d4jwl"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648793    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02c02672-9134-4bb8-abdb-c15c1f3334ac-xtables-lock\") pod \"kindnet-d4jwl\" (UID: \"02c02672-9134-4bb8-abdb-c15c1f3334ac\") " pod="kube-system/kindnet-d4jwl"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648836    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02c02672-9134-4bb8-abdb-c15c1f3334ac-lib-modules\") pod \"kindnet-d4jwl\" (UID: \"02c02672-9134-4bb8-abdb-c15c1f3334ac\") " pod="kube-system/kindnet-d4jwl"
	Jun 02 17:36:34 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:34.278505    1929 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="8b348b20026befa3c552f31c8ab97fdc1e10e628c4b4e41c29231be6ada84e9f"
	Jun 02 17:36:34 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:34.558284    1929 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="68604fb5b9b164523ba623d98dd04fda3282413a13be677be9364b38ba1a32d0"
	Jun 02 17:36:35 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:35.933977    1929 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.mk"
	Jun 02 17:36:36 multinode-20220602173558-283122 kubelet[1929]: E0602 17:36:36.344191    1929 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Jun 02 17:36:41 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:41.542745    1929 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 17:36:41 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:41.543112    1929 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 17:36:41 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:41.593727    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d796da5e-d4e3-4761-84e2-c742ea94211a-config-volume\") pod \"coredns-64897985d-l5jxv\" (UID: \"d796da5e-d4e3-4761-84e2-c742ea94211a\") " pod="kube-system/coredns-64897985d-l5jxv"
	Jun 02 17:36:41 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:41.593788    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8974k\" (UniqueName: \"kubernetes.io/projected/d796da5e-d4e3-4761-84e2-c742ea94211a-kube-api-access-8974k\") pod \"coredns-64897985d-l5jxv\" (UID: \"d796da5e-d4e3-4761-84e2-c742ea94211a\") " pod="kube-system/coredns-64897985d-l5jxv"
	Jun 02 17:36:41 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:41.593906    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8a59fe44-28d0-431f-9a48-d4f0705d7d5a-tmp\") pod \"storage-provisioner\" (UID: \"8a59fe44-28d0-431f-9a48-d4f0705d7d5a\") " pod="kube-system/storage-provisioner"
	Jun 02 17:36:41 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:41.593963    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqf5l\" (UniqueName: \"kubernetes.io/projected/8a59fe44-28d0-431f-9a48-d4f0705d7d5a-kube-api-access-wqf5l\") pod \"storage-provisioner\" (UID: \"8a59fe44-28d0-431f-9a48-d4f0705d7d5a\") " pod="kube-system/storage-provisioner"
	Jun 02 17:37:10 multinode-20220602173558-283122 kubelet[1929]: I0602 17:37:10.552847    1929 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 17:37:10 multinode-20220602173558-283122 kubelet[1929]: I0602 17:37:10.575008    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zms5\" (UniqueName: \"kubernetes.io/projected/6c6aa109-c4c8-4606-b0a2-e2321da8d918-kube-api-access-7zms5\") pod \"busybox-7978565885-2cv69\" (UID: \"6c6aa109-c4c8-4606-b0a2-e2321da8d918\") " pod="default/busybox-7978565885-2cv69"
	
	* 
	* ==> storage-provisioner [1a82c523cdd3] <==
	* I0602 17:36:42.170493       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0602 17:36:42.179486       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0602 17:36:42.179533       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0602 17:36:42.237252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0602 17:36:42.237454       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20220602173558-283122_88142c70-60d4-455b-abae-84f21c293b56!
	I0602 17:36:42.237453       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"87375484-2cf5-4b00-ab06-bb73cfde4992", APIVersion:"v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20220602173558-283122_88142c70-60d4-455b-abae-84f21c293b56 became leader
	I0602 17:36:42.338004       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20220602173558-283122_88142c70-60d4-455b-abae-84f21c293b56!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20220602173558-283122 -n multinode-20220602173558-283122
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-20220602173558-283122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/DeployApp2Nodes]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context multinode-20220602173558-283122 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context multinode-20220602173558-283122 describe pod : exit status 1 (46.589122ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context multinode-20220602173558-283122 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (366.76s)
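Note: the `kubectl describe pod` failure above comes from the post-mortem helper itself, not from the cluster. The field selector at helpers_test.go:261 matched zero non-running pods, so the helper invoked `describe pod` with an empty name list, which kubectl rejects with "resource name may not be empty". A minimal Go sketch of that interaction, assuming kubectl is on PATH (the helper internals shown here are illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// helpers_test.go:270 printed "non-running pods:" with an empty list,
		// so the name slice handed to `kubectl describe pod` is empty.
		nonRunningPods := []string{}

		args := append([]string{
			"--context", "multinode-20220602173558-283122",
			"describe", "pod",
		}, nonRunningPods...)

		// With no resource names, kubectl exits 1 with
		// "error: resource name may not be empty" -- the exact stderr above.
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("%s(exit: %v)\n", out, err)
	}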

TestMultiNode/serial/PingHostFrom2Pods (123.44s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-2cv69 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0602 17:43:55.531746  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
multinode_test.go:546: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-2cv69 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (1m0.242811516s)
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-2cv69 -- sh -c "ping -c 1 <nil>"
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-2cv69 -- sh -c "ping -c 1 <nil>": exit status 2 (191.714606ms)

** stderr ** 
	sh: syntax error: unexpected end of file
	command terminated with exit code 2

** /stderr **
multinode_test.go:555: Failed to ping host (<nil>) from pod (busybox-7978565885-2cv69): exit status 2
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-tq8p2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0602 17:44:33.161246  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
multinode_test.go:546: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-tq8p2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": (1m0.259257078s)
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-tq8p2 -- sh -c "ping -c 1 <nil>"
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-20220602173558-283122 -- exec busybox-7978565885-tq8p2 -- sh -c "ping -c 1 <nil>": exit status 2 (187.54417ms)

** stderr ** 
	sh: syntax error: unexpected end of file
	command terminated with exit code 2

** /stderr **
multinode_test.go:555: Failed to ping host (<nil>) from pod (busybox-7978565885-tq8p2): exit status 2
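Note: both pods fail identically. Each nslookup pipeline (multinode_test.go:546) ran for the full minute and evidently yielded no host, and the harness then formatted that missing value into the follow-up shell command, producing the literal `ping -c 1 <nil>` that sh cannot parse. Rendering a value as `<nil>` is characteristic of Go's fmt verbs applied to a nil value; a minimal sketch of the assumed mechanism (not the actual multinode_test.go code):

	package main

	import "fmt"

	func main() {
		// A nil value formatted with %v renders as "<nil>", so a lookup that
		// returned nothing turns the composed command into `ping -c 1 <nil>`.
		var host interface{} // nil: stands in for the empty nslookup result
		cmd := fmt.Sprintf("ping -c 1 %v", host)
		fmt.Println(cmd) // prints: ping -c 1 <nil>
	}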
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220602173558-283122
helpers_test.go:235: (dbg) docker inspect multinode-20220602173558-283122:

-- stdout --
	[
	    {
	        "Id": "96c9bda474fd4a427ecb8f5b9f99b269e586546e76098bc8dfc3d3b9a5f8cd9a",
	        "Created": "2022-06-02T17:36:06.204587009Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 375463,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-06-02T17:36:06.557905139Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/96c9bda474fd4a427ecb8f5b9f99b269e586546e76098bc8dfc3d3b9a5f8cd9a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/96c9bda474fd4a427ecb8f5b9f99b269e586546e76098bc8dfc3d3b9a5f8cd9a/hostname",
	        "HostsPath": "/var/lib/docker/containers/96c9bda474fd4a427ecb8f5b9f99b269e586546e76098bc8dfc3d3b9a5f8cd9a/hosts",
	        "LogPath": "/var/lib/docker/containers/96c9bda474fd4a427ecb8f5b9f99b269e586546e76098bc8dfc3d3b9a5f8cd9a/96c9bda474fd4a427ecb8f5b9f99b269e586546e76098bc8dfc3d3b9a5f8cd9a-json.log",
	        "Name": "/multinode-20220602173558-283122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20220602173558-283122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-20220602173558-283122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7a6d0a94f3e52abf75f98ca4ad59837768bc78ac8576764715d5c8ae531a7346-init/diff:/var/lib/docker/overlay2/9517781e97ae29bbe1cf38381a601e806342524da1c5d5dc15ba54ec17f204b6/diff:/var/lib/docker/overlay2/28e14d8724bd82b9b6adde55b66e6d1f1be4fa6c3379f2f44ca1138dd648175a/diff:/var/lib/docker/overlay2/24589dfb9ddc941e7cb22b5e38dec69d1af0a29a330d344152ca7f26010354a8/diff:/var/lib/docker/overlay2/66c45d6273a3507b6e0b6b0753ec4fc82f07160fd115626e24a42bda27bbba86/diff:/var/lib/docker/overlay2/a6f0c661bd178a2a0fef034493a114067e8952b8516e69aff1e1e94a75918bbc/diff:/var/lib/docker/overlay2/5b40a045506372be84bfb01d05b068bc4cc926c5eb9e868226c4c386e8a331c7/diff:/var/lib/docker/overlay2/dbf9a9c6e59628984c95ea93ff4ada66c53b22ca3c2f910f70efba895cf40718/diff:/var/lib/docker/overlay2/5eea619393ee989a35a37a13b0633685af91729e1b74f6e73fd94b63c9d7d1fa/diff:/var/lib/docker/overlay2/24de6fbece5386c3658fa3e2c01723e6c1c24fb837d53b3d5a8c75b64ce2cd06/diff:/var/lib/docker/overlay2/04593e
f7d3bf72ee8b7439b0326af5b9818930e0dd48c35dba97e3b4ae39f4ee/diff:/var/lib/docker/overlay2/09310697cecb557085bb4d5dfc810a82fc533759ad98179263b2fc94153a64fa/diff:/var/lib/docker/overlay2/ee75c420f0615a19a5d44cfe042bdca5a2cf0f1c541168ba424140cf2aa7f98d/diff:/var/lib/docker/overlay2/fc8643a3e2060ab0365cd1227ab9ebbd5a573e6ebf8379bc6b64a756a1f66562/diff:/var/lib/docker/overlay2/0f8571cab87b35114fab495cbde9af8ac7e58be99412da82a79c90239e2227e1/diff:/var/lib/docker/overlay2/e1a5ae079e4f566081e5e786955257c23809c8719ea6b4593c8c91ac00380379/diff:/var/lib/docker/overlay2/373565e63dec12c85906e80b4a030f7d852992a756749be9424ac17802d589c0/diff:/var/lib/docker/overlay2/b886b42aa5e9f06b4381a6f58d55338774a2d02af2e91e3de156aa4bca4e4be0/diff:/var/lib/docker/overlay2/dee534a744ce54d08ca752b7f64e1772c063d4ba1e008c5a9773188bb009c377/diff:/var/lib/docker/overlay2/c8cf6667f29ffc5417675e4bd8eee6a91468de13292e3cb6ae2c70d3ad56d5fd/diff:/var/lib/docker/overlay2/072b57a4228c3928bdb27162809b102b485bb94c5f726c9a3e9892267ed30839/diff:/var/lib/d
ocker/overlay2/491d5b51bf6de1cfc4eef288c454f54e3b424e3b83fd9dc216c35891c7923f55/diff:/var/lib/docker/overlay2/78f167c2cc286e4058ef09d5a719437afddde854aac41b8d054567865fa61024/diff:/var/lib/docker/overlay2/10763b3ba26983ed9426ccca08bfceb7037d95c69827847e2ff755b4f2c5a4ab/diff:/var/lib/docker/overlay2/e559098efb21164d96bd0aee50fee9f48310df68887c5c884f4ba3f3fc895260/diff:/var/lib/docker/overlay2/555ef1505fe2b1ab4244f073e9212f269d70d8cd6ca0cd517d21cc80cd25b4c1/diff:/var/lib/docker/overlay2/0201da7c907a1f8f564cda48c3f3a403e484d901739d2584f1343a7907685c5e/diff:/var/lib/docker/overlay2/0abae5d84f8497a5114349421415431c43a34015ad98f6c9c3140143d2f1f383/diff:/var/lib/docker/overlay2/80c37982eef2dc00bd08f208551df377570134af84008156f6a500855fd2ba10/diff:/var/lib/docker/overlay2/cc38481bb11be97cd54c0032709337a491580c2f0ae6d3f7eec4c3616f8e3126/diff:/var/lib/docker/overlay2/dda978aeb4c7317bafbc077229c4417a4d8e7658d9604803c401448b471eab5e/diff:/var/lib/docker/overlay2/7c230ae0970689e2932cdaadc9e3f86ec8215fbf384c9eb59cfb958daf7
3197e/diff:/var/lib/docker/overlay2/145db51b169211e46c3bbab5ca998b2a019a4a813801acc2dae9b2dc09bc2ecc/diff:/var/lib/docker/overlay2/e604163672a013549c59bc042d1b932a3dad9e104d7079fbbb3ca24c29351c0d/diff:/var/lib/docker/overlay2/5a304d8680fd3fb594754173a70646923e2715ed21c98b8c43e1f1ef2d7abd6a/diff:/var/lib/docker/overlay2/b77f00351ddd70be2f3d9dd622b78e1a2937c4ba71585b357fef88d285eb762e/diff:/var/lib/docker/overlay2/196b6ae042b9b08748253c16f5e2b56d7f9a25077008d3d447cbc75447ed5e2a/diff:/var/lib/docker/overlay2/8c7c3fbb0bed8f7440d11104257806c70d16fd7d432311fc208d5012a439e5a1/diff:/var/lib/docker/overlay2/10604aa3b8ed5698c9fa8f55e00b9a9dc27d6b99135255d6c756fc080e615fc6/diff:/var/lib/docker/overlay2/c6602a17e9b0ab1d84297109204b48e60f293952fbc1feaf362895ec86860ead/diff:/var/lib/docker/overlay2/3828c5e50db9be2b4bc30fce040392ea735da5eaa819ad0848cb4fb79fc367da/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a6d0a94f3e52abf75f98ca4ad59837768bc78ac8576764715d5c8ae531a7346/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a6d0a94f3e52abf75f98ca4ad59837768bc78ac8576764715d5c8ae531a7346/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a6d0a94f3e52abf75f98ca4ad59837768bc78ac8576764715d5c8ae531a7346/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-20220602173558-283122",
	                "Source": "/var/lib/docker/volumes/multinode-20220602173558-283122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20220602173558-283122",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20220602173558-283122",
	                "name.minikube.sigs.k8s.io": "multinode-20220602173558-283122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bd2760c40398b26c7be4b20c43947c0aeb2d1a01a36b100eaad470b4a325cc7f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49517"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49516"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49513"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49515"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49514"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bd2760c40398",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-20220602173558-283122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "96c9bda474fd",
	                        "multinode-20220602173558-283122"
	                    ],
	                    "NetworkID": "e61542ad8e34e7ff09d21a0a5fccca185a192533dcb4a9237f61a24204efb552",
	                    "EndpointID": "9d7d50fd4febb23cbd280da5982ab665e982dee27604ea1e794d9acf3c76af9b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-20220602173558-283122 -n multinode-20220602173558-283122
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220602173558-283122 logs -n 25: (1.259627066s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                       Args                        |               Profile               |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------|---------|----------------|---------------------|---------------------|
	| profile | list -ojson                                       | first-20220602173437-283122         | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	| profile | second-20220602173437-283122                      | first-20220602173437-283122         | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	| profile | list -ojson                                       | second-20220602173437-283122        | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	| delete  | -p                                                | second-20220602173437-283122        | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | second-20220602173437-283122                      |                                     |         |                |                     |                     |
	| delete  | -p first-20220602173437-283122                    | first-20220602173437-283122         | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	| start   | -p                                                | mount-start-1-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | mount-start-1-20220602173533-283122               |                                     |         |                |                     |                     |
	|         | --memory=2048 --mount                             |                                     |         |                |                     |                     |
	|         | --mount-gid 0 --mount-msize 6543                  |                                     |         |                |                     |                     |
	|         | --mount-port 46464 --mount-uid 0                  |                                     |         |                |                     |                     |
	|         | --no-kubernetes --driver=docker                   |                                     |         |                |                     |                     |
	|         | --container-runtime=docker                        |                                     |         |                |                     |                     |
	| ssh     | mount-start-1-20220602173533-283122               | mount-start-1-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | ssh -- ls /minikube-host                          |                                     |         |                |                     |                     |
	| start   | -p                                                | mount-start-2-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | mount-start-2-20220602173533-283122               |                                     |         |                |                     |                     |
	|         | --memory=2048 --mount                             |                                     |         |                |                     |                     |
	|         | --mount-gid 0 --mount-msize 6543                  |                                     |         |                |                     |                     |
	|         | --mount-port 46465 --mount-uid 0                  |                                     |         |                |                     |                     |
	|         | --no-kubernetes --driver=docker                   |                                     |         |                |                     |                     |
	|         | --container-runtime=docker                        |                                     |         |                |                     |                     |
	| ssh     | mount-start-2-20220602173533-283122               | mount-start-2-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | ssh -- ls /minikube-host                          |                                     |         |                |                     |                     |
	| delete  | -p                                                | mount-start-1-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | mount-start-1-20220602173533-283122               |                                     |         |                |                     |                     |
	|         | --alsologtostderr -v=5                            |                                     |         |                |                     |                     |
	| ssh     | mount-start-2-20220602173533-283122               | mount-start-2-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | ssh -- ls /minikube-host                          |                                     |         |                |                     |                     |
	| stop    | -p                                                | mount-start-2-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | mount-start-2-20220602173533-283122               |                                     |         |                |                     |                     |
	| start   | -p                                                | mount-start-2-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | mount-start-2-20220602173533-283122               |                                     |         |                |                     |                     |
	| ssh     | mount-start-2-20220602173533-283122               | mount-start-2-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | ssh -- ls /minikube-host                          |                                     |         |                |                     |                     |
	| delete  | -p                                                | mount-start-2-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | mount-start-2-20220602173533-283122               |                                     |         |                |                     |                     |
	| delete  | -p                                                | mount-start-1-20220602173533-283122 | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:35 UTC |
	|         | mount-start-1-20220602173533-283122               |                                     |         |                |                     |                     |
	| start   | -p                                                | multinode-20220602173558-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:35 UTC | 02 Jun 22 17:37 UTC |
	|         | multinode-20220602173558-283122                   |                                     |         |                |                     |                     |
	|         | --wait=true --memory=2200                         |                                     |         |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                                     |         |                |                     |                     |
	|         | --alsologtostderr                                 |                                     |         |                |                     |                     |
	|         | --driver=docker                                   |                                     |         |                |                     |                     |
	|         | --container-runtime=docker                        |                                     |         |                |                     |                     |
	| kubectl | -p multinode-20220602173558-283122 -- apply -f    | multinode-20220602173558-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 UTC | 02 Jun 22 17:37 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                                     |         |                |                     |                     |
	| kubectl | -p                                                | multinode-20220602173558-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 UTC | 02 Jun 22 17:37 UTC |
	|         | multinode-20220602173558-283122                   |                                     |         |                |                     |                     |
	|         | -- rollout status                                 |                                     |         |                |                     |                     |
	|         | deployment/busybox                                |                                     |         |                |                     |                     |
	| kubectl | -p multinode-20220602173558-283122                | multinode-20220602173558-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 UTC | 02 Jun 22 17:37 UTC |
	|         | -- get pods -o                                    |                                     |         |                |                     |                     |
	|         | jsonpath='{.items[*].status.podIP}'               |                                     |         |                |                     |                     |
	| kubectl | -p multinode-20220602173558-283122                | multinode-20220602173558-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:37 UTC | 02 Jun 22 17:37 UTC |
	|         | -- get pods -o                                    |                                     |         |                |                     |                     |
	|         | jsonpath='{.items[*].metadata.name}'              |                                     |         |                |                     |                     |
	| logs    | multinode-20220602173558-283122                   | multinode-20220602173558-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:43 UTC | 02 Jun 22 17:43 UTC |
	|         | logs -n 25                                        |                                     |         |                |                     |                     |
	| kubectl | -p multinode-20220602173558-283122                | multinode-20220602173558-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:43 UTC | 02 Jun 22 17:43 UTC |
	|         | -- get pods -o                                    |                                     |         |                |                     |                     |
	|         | jsonpath='{.items[*].metadata.name}'              |                                     |         |                |                     |                     |
	| kubectl | -p                                                | multinode-20220602173558-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:43 UTC | 02 Jun 22 17:44 UTC |
	|         | multinode-20220602173558-283122                   |                                     |         |                |                     |                     |
	|         | -- exec                                           |                                     |         |                |                     |                     |
	|         | busybox-7978565885-2cv69                          |                                     |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                                     |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                                     |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                                     |         |                |                     |                     |
	| kubectl | -p                                                | multinode-20220602173558-283122     | jenkins | v1.26.0-beta.1 | 02 Jun 22 17:44 UTC | 02 Jun 22 17:45 UTC |
	|         | multinode-20220602173558-283122                   |                                     |         |                |                     |                     |
	|         | -- exec                                           |                                     |         |                |                     |                     |
	|         | busybox-7978565885-tq8p2                          |                                     |         |                |                     |                     |
	|         | -- sh -c nslookup                                 |                                     |         |                |                     |                     |
	|         | host.minikube.internal | awk                      |                                     |         |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                                     |         |                |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 17:35:58
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 17:35:58.233087  374798 out.go:296] Setting OutFile to fd 1 ...
	I0602 17:35:58.233233  374798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:35:58.233244  374798 out.go:309] Setting ErrFile to fd 2...
	I0602 17:35:58.233248  374798 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:35:58.233368  374798 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 17:35:58.233666  374798 out.go:303] Setting JSON to false
	I0602 17:35:58.235411  374798 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8312,"bootTime":1654183047,"procs":1232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0602 17:35:58.235494  374798 start.go:125] virtualization: kvm guest
	I0602 17:35:58.238406  374798 out.go:177] * [multinode-20220602173558-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0602 17:35:58.240054  374798 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 17:35:58.239963  374798 notify.go:193] Checking for updates...
	I0602 17:35:58.241681  374798 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 17:35:58.243471  374798 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:35:58.244929  374798 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 17:35:58.246343  374798 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0602 17:35:58.248083  374798 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 17:35:58.289688  374798 docker.go:137] docker version: linux-20.10.16
	I0602 17:35:58.289836  374798 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:35:58.394930  374798 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:34 SystemTime:2022-06-02 17:35:58.319122951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:35:58.395049  374798 docker.go:254] overlay module found
	I0602 17:35:58.397296  374798 out.go:177] * Using the docker driver based on user configuration
	I0602 17:35:58.398781  374798 start.go:284] selected driver: docker
	I0602 17:35:58.398802  374798 start.go:806] validating driver "docker" against <nil>
	I0602 17:35:58.398826  374798 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 17:35:58.399679  374798 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:35:58.502623  374798 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:34 SystemTime:2022-06-02 17:35:58.427707256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:35:58.502766  374798 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 17:35:58.502961  374798 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 17:35:58.505354  374798 out.go:177] * Using Docker driver with the root privilege
	I0602 17:35:58.506734  374798 cni.go:95] Creating CNI manager for ""
	I0602 17:35:58.506759  374798 cni.go:156] 0 nodes found, recommending kindnet
	I0602 17:35:58.506797  374798 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0602 17:35:58.506810  374798 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0602 17:35:58.506815  374798 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0602 17:35:58.506842  374798 start_flags.go:306] config:
	{Name:multinode-20220602173558-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220602173558-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:35:58.508595  374798 out.go:177] * Starting control plane node multinode-20220602173558-283122 in cluster multinode-20220602173558-283122
	I0602 17:35:58.509982  374798 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 17:35:58.511418  374798 out.go:177] * Pulling base image ...
	I0602 17:35:58.512742  374798 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:35:58.512796  374798 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 17:35:58.512813  374798 cache.go:57] Caching tarball of preloaded images
	I0602 17:35:58.512833  374798 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 17:35:58.513068  374798 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 17:35:58.513087  374798 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 17:35:58.513431  374798 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/config.json ...
	I0602 17:35:58.513462  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/config.json: {Name:mkcac6351aa666483dc218cf03023a9ea6d2bae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:35:58.561417  374798 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 17:35:58.561457  374798 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 17:35:58.561475  374798 cache.go:206] Successfully downloaded all kic artifacts
	I0602 17:35:58.561527  374798 start.go:352] acquiring machines lock for multinode-20220602173558-283122: {Name:mkd1d7ce0a0491c5601a577f4da4ed2fb2774cda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 17:35:58.561685  374798 start.go:356] acquired machines lock for "multinode-20220602173558-283122" in 133.214µs
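	The machines lock above serializes concurrent provisioning attempts against the same machine store. A rough sketch of a file-based lock with the same 500ms retry delay (illustrative only; minikube's real lock implementation differs):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquire takes a crude exclusive lock by creating the lock file with
    // O_EXCL and retrying every 500ms until the timeout expires; the caller
    // releases it by removing the file.
    func acquire(path string, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for lock %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        release, err := acquire("/tmp/machines.lock", 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("machines lock held")
    }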
	I0602 17:35:58.561726  374798 start.go:91] Provisioning new machine with config: &{Name:multinode-20220602173558-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220602173558-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 17:35:58.561814  374798 start.go:131] createHost starting for "" (driver="docker")
	I0602 17:35:58.565214  374798 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0602 17:35:58.565465  374798 start.go:165] libmachine.API.Create for "multinode-20220602173558-283122" (driver="docker")
	I0602 17:35:58.565501  374798 client.go:168] LocalClient.Create starting
	I0602 17:35:58.565568  374798 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem
	I0602 17:35:58.565599  374798 main.go:134] libmachine: Decoding PEM data...
	I0602 17:35:58.565618  374798 main.go:134] libmachine: Parsing certificate...
	I0602 17:35:58.565673  374798 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem
	I0602 17:35:58.565695  374798 main.go:134] libmachine: Decoding PEM data...
	I0602 17:35:58.565707  374798 main.go:134] libmachine: Parsing certificate...
	I0602 17:35:58.566006  374798 cli_runner.go:164] Run: docker network inspect multinode-20220602173558-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 17:35:58.598357  374798 cli_runner.go:211] docker network inspect multinode-20220602173558-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 17:35:58.598454  374798 network_create.go:272] running [docker network inspect multinode-20220602173558-283122] to gather additional debugging logs...
	I0602 17:35:58.598484  374798 cli_runner.go:164] Run: docker network inspect multinode-20220602173558-283122
	W0602 17:35:58.628937  374798 cli_runner.go:211] docker network inspect multinode-20220602173558-283122 returned with exit code 1
	I0602 17:35:58.628974  374798 network_create.go:275] error running [docker network inspect multinode-20220602173558-283122]: docker network inspect multinode-20220602173558-283122: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-20220602173558-283122
	I0602 17:35:58.628992  374798 network_create.go:277] output of [docker network inspect multinode-20220602173558-283122]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-20220602173558-283122
	
	** /stderr **
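	The exit-code-1 path above is expected rather than fatal: inspecting a not-yet-created network fails with "No such network", which is what tells the tool it has to create one. A sketch of that existence probe (the helper name and network name are placeholders):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // networkExists probes for a docker network by name; `docker network
    // inspect` exits non-zero and prints "No such network" when it is absent.
    func networkExists(name string) (bool, error) {
        out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
        if err == nil {
            return true, nil
        }
        if strings.Contains(string(out), "No such network") {
            return false, nil
        }
        return false, fmt.Errorf("inspect %s: %v: %s", name, err, out)
    }

    func main() {
        ok, err := networkExists("multinode-demo") // placeholder network name
        fmt.Println(ok, err)
    }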
	I0602 17:35:58.629069  374798 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 17:35:58.661197  374798 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000484100] misses:0}
	I0602 17:35:58.661260  374798 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 17:35:58.661280  374798 network_create.go:115] attempt to create docker network multinode-20220602173558-283122 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0602 17:35:58.661328  374798 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true multinode-20220602173558-283122
	I0602 17:35:58.728468  374798 network_create.go:99] docker network multinode-20220602173558-283122 192.168.49.0/24 created
	I0602 17:35:58.728506  374798 kic.go:106] calculated static IP "192.168.49.2" for the "multinode-20220602173558-283122" container
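	The "calculated static IP" follows the convention visible in the lines above: in the fresh /24, .1 is the gateway and the first container takes .2. A small sketch of that arithmetic with the standard library:

    package main

    import (
        "fmt"
        "net"
    )

    // firstContainerIP returns the ".2" host of an IPv4 subnet: .0 is the
    // network address and .1 the gateway, so the first container gets .2.
    func firstContainerIP(cidr string) (net.IP, error) {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        ip := ipnet.IP.To4()
        if ip == nil {
            return nil, fmt.Errorf("expected an IPv4 subnet, got %s", cidr)
        }
        host := make(net.IP, len(ip))
        copy(host, ip)
        host[3] += 2
        return host, nil
    }

    func main() {
        ip, err := firstContainerIP("192.168.49.0/24")
        fmt.Println(ip, err) // 192.168.49.2 <nil>
    }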
	I0602 17:35:58.728580  374798 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 17:35:58.759443  374798 cli_runner.go:164] Run: docker volume create multinode-20220602173558-283122 --label name.minikube.sigs.k8s.io=multinode-20220602173558-283122 --label created_by.minikube.sigs.k8s.io=true
	I0602 17:35:58.793059  374798 oci.go:103] Successfully created a docker volume multinode-20220602173558-283122
	I0602 17:35:58.793149  374798 cli_runner.go:164] Run: docker run --rm --name multinode-20220602173558-283122-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220602173558-283122 --entrypoint /usr/bin/test -v multinode-20220602173558-283122:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 17:35:59.370089  374798 oci.go:107] Successfully prepared a docker volume multinode-20220602173558-283122
	I0602 17:35:59.370233  374798 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:35:59.370260  374798 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 17:35:59.370328  374798 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220602173558-283122:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 17:36:06.069247  374798 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220602173558-283122:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (6.698834574s)
	I0602 17:36:06.069294  374798 kic.go:188] duration metric: took 6.699023 seconds to extract preloaded images to volume
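	The "Completed: ... (6.698834574s)" and "duration metric" lines are produced by wrapping the external command in a wall-clock measurement. A minimal sketch of the pattern (the sleep is a stand-in for the tar extraction above):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // Stand-in for the long-running `docker run ... tar -I lz4 -xf ...`
        // extraction shown in the log.
        if err := exec.Command("sleep", "1").Run(); err != nil {
            panic(err)
        }
        fmt.Printf("duration metric: took %s to extract preloaded images\n", time.Since(start))
    }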
	W0602 17:36:06.069440  374798 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0602 17:36:06.069559  374798 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 17:36:06.174267  374798 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20220602173558-283122 --name multinode-20220602173558-283122 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220602173558-283122 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20220602173558-283122 --network multinode-20220602173558-283122 --ip 192.168.49.2 --volume multinode-20220602173558-283122:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	I0602 17:36:06.567220  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Running}}
	I0602 17:36:06.602477  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Status}}
	I0602 17:36:06.634965  374798 cli_runner.go:164] Run: docker exec multinode-20220602173558-283122 stat /var/lib/dpkg/alternatives/iptables
	I0602 17:36:06.696098  374798 oci.go:247] the created container "multinode-20220602173558-283122" has a running status.
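	After the `docker run -d ...` above, the container is repeatedly inspected until Docker reports it running, which is what the two inspect calls with --format={{.State.Running}} and --format={{.State.Status}} are doing. A simplified polling loop (container name is a placeholder):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        const name = "multinode-demo" // placeholder container name
        for i := 0; i < 20; i++ {
            out, err := exec.Command("docker", "container", "inspect", name,
                "--format", "{{.State.Status}}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "running" {
                fmt.Println("container has a running status")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for container to run")
    }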
	I0602 17:36:06.696130  374798 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa...
	I0602 17:36:06.762806  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0602 17:36:06.762854  374798 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0602 17:36:06.850067  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Status}}
	I0602 17:36:06.883321  374798 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0602 17:36:06.883351  374798 kic_runner.go:114] Args: [docker exec --privileged multinode-20220602173558-283122 chown docker:docker /home/docker/.ssh/authorized_keys]
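	The ssh key step above generates an RSA keypair on the host and copies the public half into the container's authorized_keys, then fixes its ownership. A compact sketch of producing an OpenSSH-format key with the standard crypto packages plus golang.org/x/crypto/ssh (illustrative, not minikube's actual key code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // PEM-encoded private key, kept on the host (id_rsa).
        priv := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        // OpenSSH single-line public key, destined for authorized_keys.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("private key: %d bytes\n", len(priv))
        fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
    }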
	I0602 17:36:06.976450  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Status}}
	I0602 17:36:07.010291  374798 machine.go:88] provisioning docker machine ...
	I0602 17:36:07.010342  374798 ubuntu.go:169] provisioning hostname "multinode-20220602173558-283122"
	I0602 17:36:07.010419  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:07.045968  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:07.046178  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49517 <nil> <nil>}
	I0602 17:36:07.046202  374798 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220602173558-283122 && echo "multinode-20220602173558-283122" | sudo tee /etc/hostname
	I0602 17:36:07.169951  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220602173558-283122
	
	I0602 17:36:07.170059  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:07.203134  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:07.203290  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49517 <nil> <nil>}
	I0602 17:36:07.203312  374798 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220602173558-283122' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220602173558-283122/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220602173558-283122' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 17:36:07.316909  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 17:36:07.316944  374798 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 17:36:07.316975  374798 ubuntu.go:177] setting up certificates
	I0602 17:36:07.316989  374798 provision.go:83] configureAuth start
	I0602 17:36:07.317076  374798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602173558-283122
	I0602 17:36:07.348231  374798 provision.go:138] copyHostCerts
	I0602 17:36:07.348271  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 17:36:07.348326  374798 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 17:36:07.348344  374798 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 17:36:07.348404  374798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 17:36:07.348481  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 17:36:07.348502  374798 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 17:36:07.348513  374798 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 17:36:07.348540  374798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 17:36:07.348591  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 17:36:07.348605  374798 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 17:36:07.348609  374798 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 17:36:07.348631  374798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1679 bytes)
	I0602 17:36:07.348682  374798 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.multinode-20220602173558-283122 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220602173558-283122]
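	The server certificate above is signed by the minikube CA and carries the node IP, loopback, and host names as SANs. The sketch below self-signs instead of using a CA, but shows the same SAN and expiry shape with crypto/x509 (subject and names are placeholders):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"demo-org"}}, // placeholder
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube"},
        }
        // Self-signed: template doubles as parent; minikube signs with its CA.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }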
	I0602 17:36:07.515688  374798 provision.go:172] copyRemoteCerts
	I0602 17:36:07.515763  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 17:36:07.515799  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:07.547011  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:36:07.636708  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0602 17:36:07.636785  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 17:36:07.654835  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0602 17:36:07.654898  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0602 17:36:07.672470  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0602 17:36:07.672542  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 17:36:07.690031  374798 provision.go:86] duration metric: configureAuth took 373.018186ms
	I0602 17:36:07.690062  374798 ubuntu.go:193] setting minikube options for container-runtime
	I0602 17:36:07.690288  374798 config.go:178] Loaded profile config "multinode-20220602173558-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:36:07.690352  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:07.722440  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:07.722619  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49517 <nil> <nil>}
	I0602 17:36:07.722640  374798 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 17:36:07.841371  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 17:36:07.841406  374798 ubuntu.go:71] root file system type: overlay
	I0602 17:36:07.841587  374798 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 17:36:07.841662  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:07.872720  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:07.872875  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49517 <nil> <nil>}
	I0602 17:36:07.872935  374798 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 17:36:07.997841  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 17:36:07.997928  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:08.029422  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:08.029581  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49517 <nil> <nil>}
	I0602 17:36:08.029601  374798 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 17:36:08.671832  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:36:07.991668796 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0602 17:36:08.671863  374798 machine.go:91] provisioned docker machine in 1.661540122s
	I0602 17:36:08.671875  374798 client.go:171] LocalClient.Create took 10.106368546s
	I0602 17:36:08.671895  374798 start.go:173] duration metric: libmachine.API.Create for "multinode-20220602173558-283122" took 10.106423566s
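	The diff output above is the idempotence check behind the unit update: the new docker.service is only installed, and the daemon only restarted, when the rendered content differs from what is already on disk. The same pattern in Go (paths and service name hard-coded for illustration):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged writes the unit and restarts the service only when the
    // rendered content differs from the file already on disk.
    func installIfChanged(path string, want []byte) (bool, error) {
        have, err := os.ReadFile(path)
        if err == nil && bytes.Equal(have, want) {
            return false, nil // already up to date, no restart needed
        }
        if err := os.WriteFile(path, want, 0o644); err != nil {
            return false, err
        }
        if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
            return true, err
        }
        return true, exec.Command("systemctl", "restart", "docker").Run()
    }

    func main() {
        changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        fmt.Println(changed, err)
    }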
	I0602 17:36:08.671903  374798 start.go:306] post-start starting for "multinode-20220602173558-283122" (driver="docker")
	I0602 17:36:08.671909  374798 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 17:36:08.671976  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 17:36:08.672066  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:08.703241  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:36:08.788744  374798 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 17:36:08.791529  374798 command_runner.go:130] > NAME="Ubuntu"
	I0602 17:36:08.791554  374798 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0602 17:36:08.791561  374798 command_runner.go:130] > ID=ubuntu
	I0602 17:36:08.791569  374798 command_runner.go:130] > ID_LIKE=debian
	I0602 17:36:08.791576  374798 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0602 17:36:08.791583  374798 command_runner.go:130] > VERSION_ID="20.04"
	I0602 17:36:08.791592  374798 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0602 17:36:08.791600  374798 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0602 17:36:08.791607  374798 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0602 17:36:08.791617  374798 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0602 17:36:08.791621  374798 command_runner.go:130] > VERSION_CODENAME=focal
	I0602 17:36:08.791627  374798 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0602 17:36:08.791702  374798 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 17:36:08.791719  374798 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 17:36:08.791733  374798 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 17:36:08.791740  374798 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 17:36:08.791753  374798 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 17:36:08.791819  374798 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 17:36:08.791883  374798 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem -> 2831222.pem in /etc/ssl/certs
	I0602 17:36:08.791898  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem -> /etc/ssl/certs/2831222.pem
	I0602 17:36:08.791968  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 17:36:08.799118  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem --> /etc/ssl/certs/2831222.pem (1708 bytes)
	I0602 17:36:08.816845  374798 start.go:309] post-start completed in 144.926263ms
	I0602 17:36:08.817254  374798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602173558-283122
	I0602 17:36:08.848069  374798 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/config.json ...
	I0602 17:36:08.848322  374798 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 17:36:08.848366  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:08.879030  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:36:08.961953  374798 command_runner.go:130] > 22%!
	(MISSING)I0602 17:36:08.962040  374798 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 17:36:08.965860  374798 command_runner.go:130] > 229G
	I0602 17:36:08.966032  374798 start.go:134] duration metric: createHost completed in 10.404205446s
	I0602 17:36:08.966053  374798 start.go:81] releasing machines lock for "multinode-20220602173558-283122", held for 10.40435218s
	I0602 17:36:08.966138  374798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602173558-283122
	I0602 17:36:08.998274  374798 ssh_runner.go:195] Run: systemctl --version
	I0602 17:36:08.998341  374798 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 17:36:08.998397  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:08.998347  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:09.033185  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:36:09.034883  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:36:09.117353  374798 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.17)
	I0602 17:36:09.117400  374798 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0602 17:36:09.117496  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 17:36:09.135989  374798 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0602 17:36:09.136018  374798 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0602 17:36:09.136024  374798 command_runner.go:130] > <H1>302 Moved</H1>
	I0602 17:36:09.136028  374798 command_runner.go:130] > The document has moved
	I0602 17:36:09.136033  374798 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0602 17:36:09.136038  374798 command_runner.go:130] > </BODY></HTML>
	I0602 17:36:09.136166  374798 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 17:36:09.144977  374798 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0602 17:36:09.145177  374798 command_runner.go:130] > [Unit]
	I0602 17:36:09.145207  374798 command_runner.go:130] > Description=Docker Application Container Engine
	I0602 17:36:09.145221  374798 command_runner.go:130] > Documentation=https://docs.docker.com
	I0602 17:36:09.145228  374798 command_runner.go:130] > BindsTo=containerd.service
	I0602 17:36:09.145237  374798 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0602 17:36:09.145243  374798 command_runner.go:130] > Wants=network-online.target
	I0602 17:36:09.145269  374798 command_runner.go:130] > Requires=docker.socket
	I0602 17:36:09.145280  374798 command_runner.go:130] > StartLimitBurst=3
	I0602 17:36:09.145286  374798 command_runner.go:130] > StartLimitIntervalSec=60
	I0602 17:36:09.145295  374798 command_runner.go:130] > [Service]
	I0602 17:36:09.145300  374798 command_runner.go:130] > Type=notify
	I0602 17:36:09.145306  374798 command_runner.go:130] > Restart=on-failure
	I0602 17:36:09.145321  374798 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0602 17:36:09.145339  374798 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0602 17:36:09.145354  374798 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0602 17:36:09.145371  374798 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0602 17:36:09.145381  374798 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0602 17:36:09.145388  374798 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0602 17:36:09.145395  374798 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0602 17:36:09.145403  374798 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0602 17:36:09.145414  374798 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0602 17:36:09.145422  374798 command_runner.go:130] > ExecStart=
	I0602 17:36:09.145436  374798 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0602 17:36:09.145446  374798 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0602 17:36:09.145457  374798 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0602 17:36:09.145467  374798 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0602 17:36:09.145472  374798 command_runner.go:130] > LimitNOFILE=infinity
	I0602 17:36:09.145477  374798 command_runner.go:130] > LimitNPROC=infinity
	I0602 17:36:09.145481  374798 command_runner.go:130] > LimitCORE=infinity
	I0602 17:36:09.145490  374798 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0602 17:36:09.145495  374798 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0602 17:36:09.145498  374798 command_runner.go:130] > TasksMax=infinity
	I0602 17:36:09.145502  374798 command_runner.go:130] > TimeoutStartSec=0
	I0602 17:36:09.145508  374798 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0602 17:36:09.145515  374798 command_runner.go:130] > Delegate=yes
	I0602 17:36:09.145522  374798 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0602 17:36:09.145529  374798 command_runner.go:130] > KillMode=process
	I0602 17:36:09.145534  374798 command_runner.go:130] > [Install]
	I0602 17:36:09.145541  374798 command_runner.go:130] > WantedBy=multi-user.target
	I0602 17:36:09.145865  374798 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 17:36:09.145922  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 17:36:09.155138  374798 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 17:36:09.167940  374798 command_runner.go:130] > runtime-endpoint: unix:///var/run/dockershim.sock
	I0602 17:36:09.167968  374798 command_runner.go:130] > image-endpoint: unix:///var/run/dockershim.sock
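	The crictl.yaml written above is just two YAML keys pointing CRI tooling at the dockershim socket. Producing the same content directly (target path moved under /tmp so the sketch runs without root):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        conf := "runtime-endpoint: unix:///var/run/dockershim.sock\n" +
            "image-endpoint: unix:///var/run/dockershim.sock\n"
        path := "/tmp/crictl.yaml" // the real target is /etc/crictl.yaml
        if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
            panic(err)
        }
        fmt.Println("wrote", path)
    }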
	I0602 17:36:09.168031  374798 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 17:36:09.244950  374798 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 17:36:09.320702  374798 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 17:36:09.329629  374798 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0602 17:36:09.329656  374798 command_runner.go:130] > [Unit]
	I0602 17:36:09.329672  374798 command_runner.go:130] > Description=Docker Application Container Engine
	I0602 17:36:09.329692  374798 command_runner.go:130] > Documentation=https://docs.docker.com
	I0602 17:36:09.329701  374798 command_runner.go:130] > BindsTo=containerd.service
	I0602 17:36:09.329711  374798 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0602 17:36:09.329722  374798 command_runner.go:130] > Wants=network-online.target
	I0602 17:36:09.329734  374798 command_runner.go:130] > Requires=docker.socket
	I0602 17:36:09.329748  374798 command_runner.go:130] > StartLimitBurst=3
	I0602 17:36:09.329760  374798 command_runner.go:130] > StartLimitIntervalSec=60
	I0602 17:36:09.329766  374798 command_runner.go:130] > [Service]
	I0602 17:36:09.329776  374798 command_runner.go:130] > Type=notify
	I0602 17:36:09.329783  374798 command_runner.go:130] > Restart=on-failure
	I0602 17:36:09.329801  374798 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0602 17:36:09.329817  374798 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0602 17:36:09.329831  374798 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0602 17:36:09.329846  374798 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0602 17:36:09.329860  374798 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0602 17:36:09.329875  374798 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0602 17:36:09.329891  374798 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0602 17:36:09.329907  374798 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0602 17:36:09.329917  374798 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0602 17:36:09.329927  374798 command_runner.go:130] > ExecStart=
	I0602 17:36:09.329949  374798 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0602 17:36:09.329962  374798 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0602 17:36:09.329977  374798 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0602 17:36:09.329991  374798 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0602 17:36:09.330001  374798 command_runner.go:130] > LimitNOFILE=infinity
	I0602 17:36:09.330006  374798 command_runner.go:130] > LimitNPROC=infinity
	I0602 17:36:09.330013  374798 command_runner.go:130] > LimitCORE=infinity
	I0602 17:36:09.330028  374798 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0602 17:36:09.330051  374798 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0602 17:36:09.330062  374798 command_runner.go:130] > TasksMax=infinity
	I0602 17:36:09.330071  374798 command_runner.go:130] > TimeoutStartSec=0
	I0602 17:36:09.330078  374798 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0602 17:36:09.330086  374798 command_runner.go:130] > Delegate=yes
	I0602 17:36:09.330091  374798 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0602 17:36:09.330098  374798 command_runner.go:130] > KillMode=process
	I0602 17:36:09.330102  374798 command_runner.go:130] > [Install]
	I0602 17:36:09.330109  374798 command_runner.go:130] > WantedBy=multi-user.target
	I0602 17:36:09.330447  374798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 17:36:09.408708  374798 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 17:36:09.418325  374798 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 17:36:09.454791  374798 command_runner.go:130] > 20.10.16
	I0602 17:36:09.457132  374798 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 17:36:09.494008  374798 command_runner.go:130] > 20.10.16
	I0602 17:36:09.499102  374798 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 17:36:09.499192  374798 cli_runner.go:164] Run: docker network inspect multinode-20220602173558-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 17:36:09.530539  374798 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0602 17:36:09.533912  374798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
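The bash one-liner above makes the host.minikube.internal mapping idempotent: drop any line already tagged with that name, then append a fresh ip<TAB>name pair. A Go rendering of the same idiom (the helper name and error handling are ours, not minikube's):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the grep -v / echo / cp pipeline in the log:
// remove any line ending in "\t<name>", then append "ip\tname".
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}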
	I0602 17:36:09.545513  374798 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0602 17:36:09.546970  374798 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:36:09.547036  374798 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 17:36:09.577182  374798 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.23.6
	I0602 17:36:09.577218  374798 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.23.6
	I0602 17:36:09.577227  374798 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.23.6
	I0602 17:36:09.577235  374798 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.23.6
	I0602 17:36:09.577249  374798 command_runner.go:130] > k8s.gcr.io/etcd:3.5.1-0
	I0602 17:36:09.577255  374798 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0602 17:36:09.577260  374798 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0602 17:36:09.577268  374798 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 17:36:09.579434  374798 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 17:36:09.579455  374798 docker.go:541] Images already preloaded, skipping extraction
	I0602 17:36:09.579504  374798 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 17:36:09.609223  374798 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.23.6
	I0602 17:36:09.609252  374798 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.23.6
	I0602 17:36:09.609260  374798 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.23.6
	I0602 17:36:09.609267  374798 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.23.6
	I0602 17:36:09.609274  374798 command_runner.go:130] > k8s.gcr.io/etcd:3.5.1-0
	I0602 17:36:09.609282  374798 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0602 17:36:09.609290  374798 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0602 17:36:09.609301  374798 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 17:36:09.611486  374798 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 17:36:09.611510  374798 cache_images.go:84] Images are preloaded, skipping loading
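Both `docker images --format {{.Repository}}:{{.Tag}}` runs above feed the preload check: when every image in the preload manifest is already present, tarball extraction is skipped. A sketch of that set comparison, with the expected list copied verbatim from the log (the comparison code itself is ours):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// expected is the v1.23.6 preload manifest as listed in the log.
var expected = []string{
	"k8s.gcr.io/kube-apiserver:v1.23.6",
	"k8s.gcr.io/kube-proxy:v1.23.6",
	"k8s.gcr.io/kube-scheduler:v1.23.6",
	"k8s.gcr.io/kube-controller-manager:v1.23.6",
	"k8s.gcr.io/etcd:3.5.1-0",
	"k8s.gcr.io/coredns/coredns:v1.8.6",
	"k8s.gcr.io/pause:3.6",
	"gcr.io/k8s-minikube/storage-provisioner:v5",
}

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, extraction needed:", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}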
	I0602 17:36:09.611578  374798 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 17:36:09.692543  374798 command_runner.go:130] > cgroupfs
	I0602 17:36:09.692650  374798 cni.go:95] Creating CNI manager for ""
	I0602 17:36:09.692665  374798 cni.go:156] 1 nodes found, recommending kindnet
	I0602 17:36:09.692683  374798 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 17:36:09.692701  374798 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220602173558-283122 NodeName:multinode-20220602173558-283122 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 17:36:09.692834  374798 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "multinode-20220602173558-283122"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
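The YAML above is apparently rendered from a template populated with the options struct logged at kubeadm.go:158. A toy illustration of that render step covering only a few fields (the field names mirror the struct in the log; the template text itself is ours, not minikube's real one):

package main

import (
	"os"
	"text/template"
)

// A minimal slice of the kubeadm config, parameterized the way the
// options struct suggests. minikube's actual template covers far more.
const cfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(cfg))
	_ = t.Execute(os.Stdout, map[string]interface{}{
		"AdvertiseAddress": "192.168.49.2",
		"APIServerPort":    8443,
		"CRISocket":        "/var/run/dockershim.sock",
		"NodeName":         "multinode-20220602173558-283122",
	})
}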
	I0602 17:36:09.692918  374798 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=multinode-20220602173558-283122 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:multinode-20220602173558-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 17:36:09.692969  374798 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 17:36:09.699586  374798 command_runner.go:130] > kubeadm
	I0602 17:36:09.699618  374798 command_runner.go:130] > kubectl
	I0602 17:36:09.699624  374798 command_runner.go:130] > kubelet
	I0602 17:36:09.700185  374798 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 17:36:09.700243  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 17:36:09.707281  374798 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (409 bytes)
	I0602 17:36:09.720168  374798 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 17:36:09.733002  374798 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2053 bytes)
	I0602 17:36:09.746161  374798 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0602 17:36:09.749231  374798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 17:36:09.759164  374798 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122 for IP: 192.168.49.2
	I0602 17:36:09.759279  374798 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 17:36:09.759313  374798 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 17:36:09.759361  374798 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.key
	I0602 17:36:09.759385  374798 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.crt with IP's: []
	I0602 17:36:10.033743  374798 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.crt ...
	I0602 17:36:10.033783  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.crt: {Name:mk576b2d7ed9c2c793890ead1e9c37d12768cad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:10.033999  374798 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.key ...
	I0602 17:36:10.034012  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.key: {Name:mkdfbc7402d3976d73540149469e1b639252abb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:10.034099  374798 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.key.dd3b5fb2
	I0602 17:36:10.034115  374798 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0602 17:36:10.482422  374798 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.crt.dd3b5fb2 ...
	I0602 17:36:10.482468  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.crt.dd3b5fb2: {Name:mk187e82cb77f4156767f1c3963ab9622ab60a4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:10.482683  374798 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.key.dd3b5fb2 ...
	I0602 17:36:10.482698  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.key.dd3b5fb2: {Name:mk9043639c68888e46623f7542ad65b8c2cc6cb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:10.482790  374798 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.crt
	I0602 17:36:10.482848  374798 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.key
	I0602 17:36:10.482885  374798 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.key
	I0602 17:36:10.482899  374798 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.crt with IP's: []
	I0602 17:36:10.687605  374798 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.crt ...
	I0602 17:36:10.687642  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.crt: {Name:mk065ba65d3690fadd68a80e0a5ee1cc58e053c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:10.687873  374798 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.key ...
	I0602 17:36:10.687887  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.key: {Name:mkeb52e75532542ce9664b941ff32197fdada6fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:10.687998  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0602 17:36:10.688018  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0602 17:36:10.688027  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0602 17:36:10.688037  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0602 17:36:10.688052  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0602 17:36:10.688065  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0602 17:36:10.688079  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0602 17:36:10.688090  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0602 17:36:10.688141  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem (1338 bytes)
	W0602 17:36:10.688180  374798 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122_empty.pem, impossibly tiny 0 bytes
	I0602 17:36:10.688580  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 17:36:10.688644  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 17:36:10.688675  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 17:36:10.688703  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1679 bytes)
	I0602 17:36:10.688769  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem (1708 bytes)
	I0602 17:36:10.688811  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem -> /usr/share/ca-certificates/2831222.pem
	I0602 17:36:10.688829  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:10.688840  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem -> /usr/share/ca-certificates/283122.pem
	I0602 17:36:10.690082  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 17:36:10.708457  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0602 17:36:10.725998  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 17:36:10.744712  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0602 17:36:10.763188  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 17:36:10.781831  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0602 17:36:10.799479  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 17:36:10.817089  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0602 17:36:10.835410  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem --> /usr/share/ca-certificates/2831222.pem (1708 bytes)
	I0602 17:36:10.853247  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 17:36:10.870341  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem --> /usr/share/ca-certificates/283122.pem (1338 bytes)
	I0602 17:36:10.887232  374798 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 17:36:10.899749  374798 ssh_runner.go:195] Run: openssl version
	I0602 17:36:10.904285  374798 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0602 17:36:10.904480  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 17:36:10.911777  374798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:10.915034  374798 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:10.915078  374798 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:10.915128  374798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:10.920005  374798 command_runner.go:130] > b5213941
	I0602 17:36:10.920167  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 17:36:10.927467  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/283122.pem && ln -fs /usr/share/ca-certificates/283122.pem /etc/ssl/certs/283122.pem"
	I0602 17:36:10.934485  374798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/283122.pem
	I0602 17:36:10.937355  374798 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  2 17:19 /usr/share/ca-certificates/283122.pem
	I0602 17:36:10.937491  374798 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:19 /usr/share/ca-certificates/283122.pem
	I0602 17:36:10.937547  374798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/283122.pem
	I0602 17:36:10.942142  374798 command_runner.go:130] > 51391683
	I0602 17:36:10.942313  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/283122.pem /etc/ssl/certs/51391683.0"
	I0602 17:36:10.950061  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2831222.pem && ln -fs /usr/share/ca-certificates/2831222.pem /etc/ssl/certs/2831222.pem"
	I0602 17:36:10.958171  374798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2831222.pem
	I0602 17:36:10.961323  374798 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  2 17:19 /usr/share/ca-certificates/2831222.pem
	I0602 17:36:10.961410  374798 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:19 /usr/share/ca-certificates/2831222.pem
	I0602 17:36:10.961462  374798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2831222.pem
	I0602 17:36:10.966087  374798 command_runner.go:130] > 3ec20f2e
	I0602 17:36:10.966332  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2831222.pem /etc/ssl/certs/3ec20f2e.0"
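The three test-and-link sequences above follow the OpenSSL rehash convention: each CA certificate gets a /etc/ssl/certs/<subject-hash>.0 symlink so verification code can locate it by hash. A sketch that shells out to openssl the same way the ssh_runner commands do (paths copied from the log; the helper name is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCert computes the OpenSSL subject hash of a PEM certificate and
// symlinks /etc/ssl/certs/<hash>.0 to it, matching the log's idiom.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", pem, link).Run()
}

func main() {
	for _, p := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/283122.pem",
		"/usr/share/ca-certificates/2831222.pem",
	} {
		if err := linkCert(p); err != nil {
			fmt.Println(p, err)
		}
	}
}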
	I0602 17:36:10.973650  374798 kubeadm.go:395] StartCluster: {Name:multinode-20220602173558-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220602173558-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:36:10.973792  374798 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 17:36:11.005862  374798 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 17:36:11.013407  374798 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0602 17:36:11.013446  374798 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0602 17:36:11.013452  374798 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0602 17:36:11.013515  374798 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 17:36:11.020663  374798 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 17:36:11.020732  374798 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 17:36:11.027497  374798 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0602 17:36:11.027529  374798 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0602 17:36:11.027537  374798 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0602 17:36:11.027545  374798 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 17:36:11.027583  374798 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0602 17:36:11.027621  374798 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 17:36:11.069588  374798 command_runner.go:130] > [init] Using Kubernetes version: v1.23.6
	I0602 17:36:11.069693  374798 command_runner.go:130] > [preflight] Running pre-flight checks
	I0602 17:36:11.250971  374798 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0602 17:36:11.251059  374798 command_runner.go:130] > KERNEL_VERSION: 5.13.0-1027-gcp
	I0602 17:36:11.251108  374798 command_runner.go:130] > DOCKER_VERSION: 20.10.16
	I0602 17:36:11.251159  374798 command_runner.go:130] > DOCKER_GRAPH_DRIVER: overlay2
	I0602 17:36:11.251205  374798 command_runner.go:130] > OS: Linux
	I0602 17:36:11.251269  374798 command_runner.go:130] > CGROUPS_CPU: enabled
	I0602 17:36:11.251330  374798 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0602 17:36:11.251434  374798 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0602 17:36:11.251514  374798 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0602 17:36:11.251600  374798 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0602 17:36:11.251666  374798 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0602 17:36:11.251725  374798 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0602 17:36:11.251805  374798 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0602 17:36:11.313777  374798 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0602 17:36:11.313873  374798 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0602 17:36:11.313949  374798 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0602 17:36:11.526849  374798 out.go:204]   - Generating certificates and keys ...
	I0602 17:36:11.523058  374798 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0602 17:36:11.527030  374798 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0602 17:36:11.527112  374798 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0602 17:36:11.637229  374798 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0602 17:36:11.892492  374798 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0602 17:36:12.119451  374798 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0602 17:36:12.265928  374798 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0602 17:36:12.335134  374798 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0602 17:36:12.335340  374798 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-20220602173558-283122] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0602 17:36:12.446667  374798 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0602 17:36:12.446825  374798 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-20220602173558-283122] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0602 17:36:12.592987  374798 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0602 17:36:12.675466  374798 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0602 17:36:12.788976  374798 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0602 17:36:12.789109  374798 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0602 17:36:12.894981  374798 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0602 17:36:13.065487  374798 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0602 17:36:13.222951  374798 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0602 17:36:13.450881  374798 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0602 17:36:13.462448  374798 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0602 17:36:13.462973  374798 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0602 17:36:13.463042  374798 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0602 17:36:13.552757  374798 out.go:204]   - Booting up control plane ...
	I0602 17:36:13.550341  374798 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0602 17:36:13.552889  374798 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0602 17:36:13.553089  374798 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0602 17:36:13.555081  374798 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0602 17:36:13.555860  374798 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0602 17:36:13.557575  374798 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0602 17:36:19.560611  374798 command_runner.go:130] > [apiclient] All control plane components are healthy after 6.003042 seconds
	I0602 17:36:19.560758  374798 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0602 17:36:19.569494  374798 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
	I0602 17:36:19.569761  374798 command_runner.go:130] > NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
	I0602 17:36:20.085297  374798 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0602 17:36:20.085701  374798 command_runner.go:130] > [mark-control-plane] Marking the node multinode-20220602173558-283122 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0602 17:36:20.595890  374798 out.go:204]   - Configuring RBAC rules ...
	I0602 17:36:20.594405  374798 command_runner.go:130] > [bootstrap-token] Using token: ause3q.bz1ngew9hbbt37ig
	I0602 17:36:20.596021  374798 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0602 17:36:20.599210  374798 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0602 17:36:20.603784  374798 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0602 17:36:20.605961  374798 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0602 17:36:20.607778  374798 command_runner.go:130] > [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0602 17:36:20.609730  374798 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0602 17:36:20.617631  374798 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0602 17:36:20.790355  374798 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0602 17:36:21.037832  374798 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0602 17:36:21.039000  374798 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0602 17:36:21.039103  374798 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0602 17:36:21.039142  374798 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0602 17:36:21.039213  374798 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0602 17:36:21.039288  374798 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0602 17:36:21.039358  374798 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0602 17:36:21.039421  374798 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0602 17:36:21.039490  374798 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0602 17:36:21.039579  374798 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0602 17:36:21.039665  374798 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0602 17:36:21.039763  374798 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0602 17:36:21.039856  374798 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0602 17:36:21.039966  374798 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token ause3q.bz1ngew9hbbt37ig \
	I0602 17:36:21.040092  374798 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:63ba1911ecf093d3a1264e2d920adc95fcef1f12d9f3ed8ad760b71f9de41674 \
	I0602 17:36:21.040125  374798 command_runner.go:130] > 	--control-plane 
	I0602 17:36:21.040233  374798 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0602 17:36:21.040343  374798 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ause3q.bz1ngew9hbbt37ig \
	I0602 17:36:21.040459  374798 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:63ba1911ecf093d3a1264e2d920adc95fcef1f12d9f3ed8ad760b71f9de41674 
	I0602 17:36:21.042685  374798 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1027-gcp\n", err: exit status 1
	I0602 17:36:21.042814  374798 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
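The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA. It can be recomputed from the certificate alone; /etc/kubernetes/pki/ca.crt is kubeadm's conventional location, though this cluster keeps its certs under /var/lib/minikube/certs, so treat the path as an assumption:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Prints the value used for --discovery-token-ca-cert-hash: "sha256:"
// followed by the hash of the CA's Subject Public Key Info.
func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}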
	I0602 17:36:21.042843  374798 cni.go:95] Creating CNI manager for ""
	I0602 17:36:21.042861  374798 cni.go:156] 1 nodes found, recommending kindnet
	I0602 17:36:21.045221  374798 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0602 17:36:21.046781  374798 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0602 17:36:21.050754  374798 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0602 17:36:21.050794  374798 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0602 17:36:21.050804  374798 command_runner.go:130] > Device: 34h/52d	Inode: 13679887    Links: 1
	I0602 17:36:21.050814  374798 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0602 17:36:21.050826  374798 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0602 17:36:21.050838  374798 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0602 17:36:21.050847  374798 command_runner.go:130] > Change: 2022-06-01 20:34:52.693415195 +0000
	I0602 17:36:21.050858  374798 command_runner.go:130] >  Birth: -
	I0602 17:36:21.050957  374798 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0602 17:36:21.050974  374798 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0602 17:36:21.068822  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0602 17:36:22.032230  374798 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0602 17:36:22.036626  374798 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0602 17:36:22.042548  374798 command_runner.go:130] > serviceaccount/kindnet created
	I0602 17:36:22.050096  374798 command_runner.go:130] > daemonset.apps/kindnet created
	I0602 17:36:22.054348  374798 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 17:36:22.054429  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:22.054438  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=multinode-20220602173558-283122 minikube.k8s.io/updated_at=2022_06_02T17_36_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:22.062033  374798 command_runner.go:130] > -16
	I0602 17:36:22.152157  374798 command_runner.go:130] > node/multinode-20220602173558-283122 labeled
	I0602 17:36:22.152269  374798 ops.go:34] apiserver oom_adj: -16
	I0602 17:36:22.152293  374798 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0602 17:36:22.152353  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:22.203278  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:22.706784  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:22.762148  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:23.206821  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:23.260408  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:23.707075  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:23.761907  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:24.206385  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:24.258425  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:24.706353  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:24.761858  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:25.206482  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:25.260574  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:25.706188  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:25.757216  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:26.206246  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:26.257308  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:26.706456  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:26.760266  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:27.206851  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:27.257894  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:27.707021  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:27.760746  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:28.206434  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:28.262366  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:28.707076  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:28.759529  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:29.206884  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:29.260865  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:29.706423  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:29.760778  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:30.206260  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:30.261684  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:30.706197  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:30.757716  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:31.206990  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:31.262156  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:31.706548  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:31.759500  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:32.206191  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:32.260407  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:32.706859  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:32.763090  374798 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0602 17:36:33.206769  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 17:36:33.261987  374798 command_runner.go:130] > NAME      SECRETS   AGE
	I0602 17:36:33.262014  374798 command_runner.go:130] > default   1         1s
	I0602 17:36:33.262047  374798 kubeadm.go:1045] duration metric: took 11.207681132s to wait for elevateKubeSystemPrivileges.
	I0602 17:36:33.262068  374798 kubeadm.go:397] StartCluster complete in 22.288428325s
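The 11-second stretch above is a poll loop: run `kubectl get sa default` roughly every half second until the default ServiceAccount exists, which is what elevateKubeSystemPrivileges waits on before binding cluster-admin to kube-system. A minimal sketch of the same wait (the timeout value and helper name are ours):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until it succeeds,
// matching the ~500ms cadence visible in the timestamps above.
func waitForDefaultSA(kubectl string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not created within %v", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/binaries/v1.23.6/kubectl", time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("default ServiceAccount is ready")
}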
	I0602 17:36:33.262094  374798 settings.go:142] acquiring lock: {Name:mkca69c8f6bc293fef8b552d09d771e1f2253f12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:33.262209  374798 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:36:33.262953  374798 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk4aad2ea1df51829b8bb57d56bd4d8e58dc96e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:36:33.263510  374798 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:36:33.263788  374798 kapi.go:59] client config for multinode-20220602173558-283122: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17122e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0602 17:36:33.264197  374798 cert_rotation.go:137] Starting client certificate rotation controller
	I0602 17:36:33.264406  374798 round_trippers.go:463] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0602 17:36:33.264422  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:33.264430  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:33.264438  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:33.271839  374798 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0602 17:36:33.271865  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:33.271877  374798 round_trippers.go:580]     Audit-Id: d4e70230-1f99-40e8-8f74-bb3ee0adf3d0
	I0602 17:36:33.271887  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:33.271896  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:33.271904  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:33.271914  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:33.271922  374798 round_trippers.go:580]     Content-Length: 291
	I0602 17:36:33.271928  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:33 GMT
	I0602 17:36:33.271961  374798 request.go:1073] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d6c769a8-f303-42bf-8481-f4eeda576acd","resourceVersion":"288","creationTimestamp":"2022-06-02T17:36:20Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0602 17:36:33.272452  374798 request.go:1073] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d6c769a8-f303-42bf-8481-f4eeda576acd","resourceVersion":"288","creationTimestamp":"2022-06-02T17:36:20Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0602 17:36:33.272512  374798 round_trippers.go:463] PUT https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0602 17:36:33.272524  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:33.272534  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:33.272545  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:33.272561  374798 round_trippers.go:473]     Content-Type: application/json
	I0602 17:36:33.276076  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:36:33.276101  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:33.276112  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:33.276121  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:33.276130  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:33.276139  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:33.276148  374798 round_trippers.go:580]     Content-Length: 291
	I0602 17:36:33.276159  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:33 GMT
	I0602 17:36:33.276171  374798 round_trippers.go:580]     Audit-Id: 1aaa1c30-716b-4ef3-8ec8-f341c2d6909e
	I0602 17:36:33.276206  374798 request.go:1073] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d6c769a8-f303-42bf-8481-f4eeda576acd","resourceVersion":"405","creationTimestamp":"2022-06-02T17:36:20Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0602 17:36:33.776655  374798 round_trippers.go:463] GET https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0602 17:36:33.776686  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:33.776697  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:33.776706  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:33.781786  374798 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0602 17:36:33.781869  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:33.781887  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:33.781895  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:33.781904  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:33.781913  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:33.781921  374798 round_trippers.go:580]     Content-Length: 291
	I0602 17:36:33.781946  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:33 GMT
	I0602 17:36:33.781958  374798 round_trippers.go:580]     Audit-Id: 2944c589-af12-4bf7-bb00-4720e6024f36
	I0602 17:36:33.782004  374798 request.go:1073] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d6c769a8-f303-42bf-8481-f4eeda576acd","resourceVersion":"419","creationTimestamp":"2022-06-02T17:36:20Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0602 17:36:33.782149  374798 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20220602173558-283122" rescaled to 1
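The GET/PUT pair above is the standard autoscaling/v1 Scale subresource round trip for resizing a Deployment. Below is a minimal client-go sketch of the same operation — a hypothetical standalone program, not minikube's own kapi.go code; the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the test run above uses its own Jenkins path.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// GET .../deployments/coredns/scale (an autoscaling/v1 Scale object).
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	// PUT the modified Scale back; the server bumps resourceVersion (288 -> 405 in this run).
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1")
}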
	I0602 17:36:33.782223  374798 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 17:36:33.784370  374798 out.go:177] * Verifying Kubernetes components...
	I0602 17:36:33.782546  374798 config.go:178] Loaded profile config "multinode-20220602173558-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:36:33.782608  374798 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0602 17:36:33.782649  374798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 17:36:33.786236  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 17:36:33.786402  374798 addons.go:65] Setting storage-provisioner=true in profile "multinode-20220602173558-283122"
	I0602 17:36:33.786429  374798 addons.go:153] Setting addon storage-provisioner=true in "multinode-20220602173558-283122"
	W0602 17:36:33.786438  374798 addons.go:165] addon storage-provisioner should already be in state true
	I0602 17:36:33.786506  374798 host.go:66] Checking if "multinode-20220602173558-283122" exists ...
	I0602 17:36:33.787046  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Status}}
	I0602 17:36:33.787140  374798 addons.go:65] Setting default-storageclass=true in profile "multinode-20220602173558-283122"
	I0602 17:36:33.787187  374798 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20220602173558-283122"
	I0602 17:36:33.787588  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Status}}
	I0602 17:36:33.828497  374798 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:36:33.828818  374798 kapi.go:59] client config for multinode-20220602173558-283122: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17122e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0602 17:36:33.829275  374798 round_trippers.go:463] GET https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0602 17:36:33.829299  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:33.829313  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:33.829328  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:33.832446  374798 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 17:36:33.834436  374798 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 17:36:33.834463  374798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 17:36:33.834523  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:33.836595  374798 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0602 17:36:33.836618  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:33.836628  374798 round_trippers.go:580]     Audit-Id: 5468a20c-4dd2-4eec-8614-10ec1e9d3ee6
	I0602 17:36:33.836636  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:33.836644  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:33.836654  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:33.836667  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:33.836685  374798 round_trippers.go:580]     Content-Length: 109
	I0602 17:36:33.836704  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:33 GMT
	I0602 17:36:33.836738  374798 request.go:1073] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"455"},"items":[]}
	I0602 17:36:33.837129  374798 addons.go:153] Setting addon default-storageclass=true in "multinode-20220602173558-283122"
	W0602 17:36:33.837154  374798 addons.go:165] addon default-storageclass should already be in state true
	I0602 17:36:33.837200  374798 host.go:66] Checking if "multinode-20220602173558-283122" exists ...
	I0602 17:36:33.837632  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Status}}
	I0602 17:36:33.875134  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:36:33.876471  374798 command_runner.go:130] > apiVersion: v1
	I0602 17:36:33.876496  374798 command_runner.go:130] > data:
	I0602 17:36:33.876503  374798 command_runner.go:130] >   Corefile: |
	I0602 17:36:33.876509  374798 command_runner.go:130] >     .:53 {
	I0602 17:36:33.876515  374798 command_runner.go:130] >         errors
	I0602 17:36:33.876522  374798 command_runner.go:130] >         health {
	I0602 17:36:33.876529  374798 command_runner.go:130] >            lameduck 5s
	I0602 17:36:33.876535  374798 command_runner.go:130] >         }
	I0602 17:36:33.876542  374798 command_runner.go:130] >         ready
	I0602 17:36:33.876556  374798 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0602 17:36:33.876567  374798 command_runner.go:130] >            pods insecure
	I0602 17:36:33.876578  374798 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0602 17:36:33.876587  374798 command_runner.go:130] >            ttl 30
	I0602 17:36:33.876594  374798 command_runner.go:130] >         }
	I0602 17:36:33.876605  374798 command_runner.go:130] >         prometheus :9153
	I0602 17:36:33.876617  374798 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0602 17:36:33.876625  374798 command_runner.go:130] >            max_concurrent 1000
	I0602 17:36:33.876637  374798 command_runner.go:130] >         }
	I0602 17:36:33.876647  374798 command_runner.go:130] >         cache 30
	I0602 17:36:33.876654  374798 command_runner.go:130] >         loop
	I0602 17:36:33.876664  374798 command_runner.go:130] >         reload
	I0602 17:36:33.876671  374798 command_runner.go:130] >         loadbalance
	I0602 17:36:33.876680  374798 command_runner.go:130] >     }
	I0602 17:36:33.876688  374798 command_runner.go:130] > kind: ConfigMap
	I0602 17:36:33.876692  374798 command_runner.go:130] > metadata:
	I0602 17:36:33.876703  374798 command_runner.go:130] >   creationTimestamp: "2022-06-02T17:36:20Z"
	I0602 17:36:33.876714  374798 command_runner.go:130] >   name: coredns
	I0602 17:36:33.876721  374798 command_runner.go:130] >   namespace: kube-system
	I0602 17:36:33.876737  374798 command_runner.go:130] >   resourceVersion: "284"
	I0602 17:36:33.876749  374798 command_runner.go:130] >   uid: 29709329-131e-44df-a33f-835deca75ba9
	I0602 17:36:33.876903  374798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0602 17:36:33.877209  374798 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:36:33.877504  374798 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 17:36:33.877525  374798 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 17:36:33.877583  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:33.877530  374798 kapi.go:59] client config for multinode-20220602173558-283122: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17122e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0602 17:36:33.877809  374798 node_ready.go:35] waiting up to 6m0s for node "multinode-20220602173558-283122" to be "Ready" ...
	I0602 17:36:33.877887  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:33.877898  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:33.877911  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:33.877925  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:33.880544  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:33.880571  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:33.880581  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:33.880590  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:33 GMT
	I0602 17:36:33.880600  374798 round_trippers.go:580]     Audit-Id: d723961b-622c-4ea2-9d7b-59a46bfd5432
	I0602 17:36:33.880609  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:33.880619  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:33.880627  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:33.880750  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:33.913635  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
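The sshutil lines above connect to the node over the docker-published 22/tcp port (49517 here, obtained via the container inspect template) using the profile's id_rsa key. A minimal sketch of such a connection with golang.org/x/crypto/ssh — hypothetical, not minikube's ssh_runner; the key path and port are taken from this run, and host-key checking is skipped only because the target is a local test container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path as logged above; adjust for your own profile directory.
	key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/multinode-20220602173558-283122/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
	}
	// 49517 is the host port docker mapped to the container's 22/tcp.
	client, err := ssh.Dial("tcp", "127.0.0.1:49517", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected")
}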
	I0602 17:36:34.048392  374798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 17:36:34.049045  374798 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 17:36:34.162342  374798 command_runner.go:130] > configmap/coredns replaced
	I0602 17:36:34.162383  374798 start.go:806] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
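For reference, the sed expression in the ssh_runner command above inserts a hosts block immediately before the forward stanza, so the replaced Corefile contains:

        prometheus :9153
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }

which is what makes host.minikube.internal resolve to 192.168.49.1 from inside the cluster, as the injected host record confirms.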
	I0602 17:36:34.382725  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:34.382759  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:34.382772  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:34.382782  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:34.436445  374798 round_trippers.go:574] Response Status: 200 OK in 53 milliseconds
	I0602 17:36:34.436476  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:34.436488  374798 round_trippers.go:580]     Audit-Id: be4afe3f-428d-4230-8bcb-26b5470216ec
	I0602 17:36:34.436496  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:34.436505  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:34.436514  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:34.436523  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:34.436532  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:34 GMT
	I0602 17:36:34.436661  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:34.483293  374798 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0602 17:36:34.483328  374798 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0602 17:36:34.483340  374798 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0602 17:36:34.483353  374798 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0602 17:36:34.483360  374798 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0602 17:36:34.483368  374798 command_runner.go:130] > pod/storage-provisioner created
	I0602 17:36:34.483467  374798 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0602 17:36:34.486487  374798 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0602 17:36:34.487937  374798 addons.go:417] enableAddons completed in 705.329715ms
	I0602 17:36:34.881753  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:34.881782  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:34.881792  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:34.881798  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:34.884460  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:34.884491  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:34.884500  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:34.884509  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:34.884518  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:34 GMT
	I0602 17:36:34.884527  374798 round_trippers.go:580]     Audit-Id: 8214eca4-6a2c-4773-9576-ead6cc5ca3a2
	I0602 17:36:34.884538  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:34.884546  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:34.884684  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:35.382229  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:35.382254  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:35.382263  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:35.382269  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:35.384964  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:35.384998  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:35.385027  374798 round_trippers.go:580]     Audit-Id: 0b7399db-25d6-4dd7-a0ea-fd2ccd6d26dd
	I0602 17:36:35.385039  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:35.385050  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:35.385063  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:35.385075  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:35.385093  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:35 GMT
	I0602 17:36:35.385250  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:35.881818  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:35.881846  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:35.881859  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:35.881869  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:35.884559  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:35.884601  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:35.884615  374798 round_trippers.go:580]     Audit-Id: 1c0a3a32-9450-4c1e-a800-b69505de7227
	I0602 17:36:35.884625  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:35.884636  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:35.884646  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:35.884674  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:35.884684  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:35 GMT
	I0602 17:36:35.884824  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:35.885214  374798 node_ready.go:58] node "multinode-20220602173558-283122" has status "Ready":"False"
	I0602 17:36:36.382383  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:36.382412  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:36.382424  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:36.382430  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:36.385104  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:36.385138  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:36.385151  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:36.385161  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:36.385168  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:36.385174  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:36 GMT
	I0602 17:36:36.385180  374798 round_trippers.go:580]     Audit-Id: 78c55018-e381-4482-affc-b43695cd4d31
	I0602 17:36:36.385193  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:36.385311  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:36.881881  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:36.881911  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:36.881924  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:36.881932  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:36.884312  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:36.884339  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:36.884347  374798 round_trippers.go:580]     Audit-Id: 8f713047-a40a-4c5c-8002-d7e2fdcb6a7b
	I0602 17:36:36.884353  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:36.884359  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:36.884364  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:36.884369  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:36.884374  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:36 GMT
	I0602 17:36:36.884577  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:37.381784  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:37.381811  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:37.381823  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:37.381831  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:37.384948  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:36:37.385039  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:37.385060  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:37.385069  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:37.385079  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:37 GMT
	I0602 17:36:37.385089  374798 round_trippers.go:580]     Audit-Id: acc91c11-3a35-44a5-ac6d-5f81df1e7892
	I0602 17:36:37.385103  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:37.385113  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:37.385269  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:37.882340  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:37.882363  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:37.882372  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:37.882378  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:37.884935  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:37.884960  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:37.884968  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:37.884974  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:37.884980  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:37.884986  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:37.884994  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:37 GMT
	I0602 17:36:37.885002  374798 round_trippers.go:580]     Audit-Id: cc30f438-39cd-480f-b557-86be325b2401
	I0602 17:36:37.885151  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:37.885470  374798 node_ready.go:58] node "multinode-20220602173558-283122" has status "Ready":"False"
	I0602 17:36:38.382132  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:38.382155  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:38.382164  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:38.382173  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:38.384600  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:38.384631  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:38.384642  374798 round_trippers.go:580]     Audit-Id: ceb14e74-25b6-49af-9a63-2ae776c0038a
	I0602 17:36:38.384650  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:38.384659  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:38.384667  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:38.384684  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:38.384710  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:38 GMT
	I0602 17:36:38.384842  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:38.882277  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:38.882304  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:38.882313  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:38.882319  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:38.884766  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:38.884796  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:38.884807  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:38.884816  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:38 GMT
	I0602 17:36:38.884826  374798 round_trippers.go:580]     Audit-Id: 43bbdb5c-78f8-470d-be90-044079b09979
	I0602 17:36:38.884835  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:38.884847  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:38.884856  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:38.884938  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:39.382268  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:39.382293  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:39.382305  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:39.382314  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:39.384941  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:39.384973  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:39.384984  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:39.384992  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:39.385001  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:39.385025  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:39.385034  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:39 GMT
	I0602 17:36:39.385047  374798 round_trippers.go:580]     Audit-Id: c57b02ee-3a67-498a-940c-538cc59c9453
	I0602 17:36:39.385162  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:39.882475  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:39.882504  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:39.882513  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:39.882520  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:39.885049  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:39.885077  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:39.885085  374798 round_trippers.go:580]     Audit-Id: 18978d43-e2b1-455e-9fc8-7f151e5f73d9
	I0602 17:36:39.885091  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:39.885096  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:39.885102  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:39.885107  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:39.885113  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:39 GMT
	I0602 17:36:39.885261  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:39.885589  374798 node_ready.go:58] node "multinode-20220602173558-283122" has status "Ready":"False"
	I0602 17:36:40.381716  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:40.381743  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:40.381752  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:40.381758  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:40.384943  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:36:40.384973  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:40.384984  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:40 GMT
	I0602 17:36:40.384992  374798 round_trippers.go:580]     Audit-Id: ccefe130-264f-439d-b847-b78f155330df
	I0602 17:36:40.385000  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:40.385027  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:40.385038  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:40.385052  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:40.385162  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:40.881784  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:40.881812  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:40.881825  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:40.881836  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:40.884219  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:40.884247  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:40.884260  374798 round_trippers.go:580]     Audit-Id: 00fe19e7-9b18-44ec-8aae-ccb952b4c770
	I0602 17:36:40.884270  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:40.884279  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:40.884288  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:40.884302  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:40.884314  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:40 GMT
	I0602 17:36:40.884444  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:41.381898  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:41.381925  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:41.381935  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:41.381941  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:41.384644  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:41.384669  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:41.384680  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:41.384691  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:41.384700  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:41.384713  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:41.384732  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:41 GMT
	I0602 17:36:41.384741  374798 round_trippers.go:580]     Audit-Id: 5b21a3d2-1435-4963-8388-b858c3a7c7f2
	I0602 17:36:41.384870  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"420","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5041 chars]
	I0602 17:36:41.882270  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:41.882295  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:41.882304  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:41.882310  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:41.884892  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:41.884928  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:41.884938  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:41.884945  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:41 GMT
	I0602 17:36:41.884950  374798 round_trippers.go:580]     Audit-Id: 339cbd7d-9077-4c5e-abae-6364661fdbe1
	I0602 17:36:41.884955  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:41.884961  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:41.884966  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:41.885116  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:41.885520  374798 node_ready.go:49] node "multinode-20220602173558-283122" has status "Ready":"True"
	I0602 17:36:41.885550  374798 node_ready.go:38] duration metric: took 8.007719429s waiting for node "multinode-20220602173558-283122" to be "Ready" ...
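The polling above is the test waiting for the Node object's Ready condition to flip to True, issuing a fresh GET roughly every 500ms. A minimal client-go sketch of the same check (a hypothetical helper, not minikube's own code; assumes a reachable kubeconfig):

package health

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient is a sketch, not minikube's code: it builds a clientset from
// a kubeconfig path, the same kind of client whose HTTP traffic the
// round_trippers lines above are tracing.
func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(cfg)
}

// nodeReady reports whether the named node currently has Ready=True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

The later sketches in this section reuse cs for a clientset built this way.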
	I0602 17:36:41.885565  374798 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 17:36:41.885659  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0602 17:36:41.885675  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:41.885686  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:41.885696  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:41.889487  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:36:41.889519  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:41.889532  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:41.889542  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:41.889551  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:41.889561  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:41 GMT
	I0602 17:36:41.889570  374798 round_trippers.go:580]     Audit-Id: 822bdea6-7c93-4e70-8bf6-4adb46c14093
	I0602 17:36:41.889582  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:41.890064  374798 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"492","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:a
rgs":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 55559 chars]
	I0602 17:36:41.893684  374798 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-l5jxv" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:41.893770  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-l5jxv
	I0602 17:36:41.893783  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:41.893793  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:41.893805  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:41.896166  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:41.896191  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:41.896202  374798 round_trippers.go:580]     Audit-Id: 77625ecc-db4c-428d-8185-dac7f7506624
	I0602 17:36:41.896211  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:41.896253  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:41.896262  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:41.896272  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:41.896280  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:41 GMT
	I0602 17:36:41.896436  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"492","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5852 chars]
	I0602 17:36:41.897095  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:41.897121  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:41.897134  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:41.897145  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:41.899440  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:41.899464  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:41.899475  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:41.899484  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:41.899499  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:41 GMT
	I0602 17:36:41.899513  374798 round_trippers.go:580]     Audit-Id: 5415fdb7-ea2f-4059-ac32-e38c587a15c5
	I0602 17:36:41.899525  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:41.899535  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:41.899645  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:42.400904  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-l5jxv
	I0602 17:36:42.400936  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.400947  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.400957  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.403451  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:42.403481  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.403493  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.403502  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.403515  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.403528  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.403537  374798 round_trippers.go:580]     Audit-Id: 3705a7ba-f89f-40a0-8741-17a77d8c14f2
	I0602 17:36:42.403542  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.403650  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"492","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5852 chars]
	I0602 17:36:42.404117  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:42.404135  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.404144  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.404150  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.406145  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.406166  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.406178  374798 round_trippers.go:580]     Audit-Id: 490707a4-7c37-49b5-a7a9-9a06dc2599ee
	I0602 17:36:42.406187  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.406203  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.406216  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.406229  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.406242  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.406341  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:42.900969  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-l5jxv
	I0602 17:36:42.901001  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.901026  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.901038  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.903594  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:42.903630  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.903639  374798 round_trippers.go:580]     Audit-Id: ecfab05a-579f-4fdc-be30-6bdc5e7ad588
	I0602 17:36:42.903646  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.903653  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.903659  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.903667  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.903676  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.903862  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"504","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5979 chars]
	I0602 17:36:42.904318  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:42.904332  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.904340  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.904346  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.908145  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:36:42.908174  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.908185  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.908193  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.908203  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.908217  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.908229  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.908244  374798 round_trippers.go:580]     Audit-Id: 43fe0ab3-9516-4693-adba-af4d86f892c2
	I0602 17:36:42.908352  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:42.908675  374798 pod_ready.go:92] pod "coredns-64897985d-l5jxv" in "kube-system" namespace has status "Ready":"True"
	I0602 17:36:42.908702  374798 pod_ready.go:81] duration metric: took 1.014987777s waiting for pod "coredns-64897985d-l5jxv" in "kube-system" namespace to be "Ready" ...
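Each per-pod wait in the log has the same shape: GET the pod, inspect status.conditions, sleep about half a second, repeat until Ready or timeout. A sketch of that loop using apimachinery's wait helper (hypothetical helper name; cs is a clientset as built above):

package health

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady is a sketch of the pod_ready.go loop above: poll roughly
// every 500ms until the pod's Ready condition is True or timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

A call like waitPodReady(cs, "kube-system", "coredns-64897985d-l5jxv", 6*time.Minute) would mirror the 6m0s budget shown in the log.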
	I0602 17:36:42.908716  374798 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:42.908778  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20220602173558-283122
	I0602 17:36:42.908789  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.908801  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.908811  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.910714  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.910734  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.910741  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.910754  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.910771  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.910776  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.910785  374798 round_trippers.go:580]     Audit-Id: f83c3364-3cb8-4451-b9bf-39c1e747eea7
	I0602 17:36:42.910791  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.910870  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220602173558-283122","namespace":"kube-system","uid":"2de3dc57-6748-4c20-bf65-3b2cbd2f8a0f","resourceVersion":"331","creationTimestamp":"2022-06-02T17:36:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"de8e89a254ff48460c714894f4297613","kubernetes.io/config.mirror":"de8e89a254ff48460c714894f4297613","kubernetes.io/config.seen":"2022-06-02T17:36:20.948851860Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:21Z","fieldsType":"FieldsV1","
fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes. [truncated 5804 chars]
	I0602 17:36:42.911245  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:42.911264  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.911274  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.911284  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.912971  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.912997  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.913007  374798 round_trippers.go:580]     Audit-Id: ba2b8761-4995-451f-8bd5-a8b8676f8068
	I0602 17:36:42.913044  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.913056  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.913071  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.913084  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.913097  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.913181  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:42.913448  374798 pod_ready.go:92] pod "etcd-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:36:42.913458  374798 pod_ready.go:81] duration metric: took 4.73125ms waiting for pod "etcd-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:42.913487  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:42.913534  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220602173558-283122
	I0602 17:36:42.913544  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.913550  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.913557  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.915483  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.915512  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.915524  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.915534  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.915549  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.915562  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.915577  374798 round_trippers.go:580]     Audit-Id: 436a6ae5-9eed-4887-a032-fad125d6652c
	I0602 17:36:42.915591  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.915761  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220602173558-283122","namespace":"kube-system","uid":"999b4342-ad8c-46aa-a5a0-bdd14089e393","resourceVersion":"326","creationTimestamp":"2022-06-02T17:36:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"9b88133a9e18b6ec7d53499c3c2debcc","kubernetes.io/config.mirror":"9b88133a9e18b6ec7d53499c3c2debcc","kubernetes.io/config.seen":"2022-06-02T17:36:13.934512571Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:20Z
","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{". [truncated 8313 chars]
	I0602 17:36:42.916327  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:42.916344  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.916356  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.916373  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.918167  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.918191  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.918198  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.918203  374798 round_trippers.go:580]     Audit-Id: e56b1373-dd97-4e77-834f-6cc07d70be06
	I0602 17:36:42.918208  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.918213  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.918218  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.918229  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.918299  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:42.918572  374798 pod_ready.go:92] pod "kube-apiserver-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:36:42.918582  374798 pod_ready.go:81] duration metric: took 5.085479ms waiting for pod "kube-apiserver-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:42.918590  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:42.918672  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220602173558-283122
	I0602 17:36:42.918684  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.918691  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.918697  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.920425  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.920446  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.920454  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.920459  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.920466  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.920475  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.920492  374798 round_trippers.go:580]     Audit-Id: d3343b46-d1d3-4f0a-b863-66487bb1200b
	I0602 17:36:42.920502  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.920654  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220602173558-283122","namespace":"kube-system","uid":"dc0ed8b1-4d22-46e1-a708-fee470e6c6fe","resourceVersion":"330","creationTimestamp":"2022-06-02T17:36:21Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"73385622a1da8f9c02cb3f38e98edff7","kubernetes.io/config.mirror":"73385622a1da8f9c02cb3f38e98edff7","kubernetes.io/config.seen":"2022-06-02T17:36:20.948871153Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":
{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror [truncated 7888 chars]
	I0602 17:36:42.921122  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:42.921138  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.921145  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.921151  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.922746  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.922769  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.922780  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.922789  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.922798  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.922815  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.922823  374798 round_trippers.go:580]     Audit-Id: 8426c5dc-f4c9-4c8a-a3c8-7ba2fe486e5f
	I0602 17:36:42.922837  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.922919  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:42.923203  374798 pod_ready.go:92] pod "kube-controller-manager-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:36:42.923217  374798 pod_ready.go:81] duration metric: took 4.617927ms waiting for pod "kube-controller-manager-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:42.923225  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q8c4p" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:42.923264  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8c4p
	I0602 17:36:42.923272  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:42.923279  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:42.923285  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:42.924944  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:36:42.924964  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:42.924972  374798 round_trippers.go:580]     Audit-Id: b616c995-8018-4bb2-97ce-fecc1ade34cb
	I0602 17:36:42.924981  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:42.924988  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:42.924997  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:42.925034  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:42.925048  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:42 GMT
	I0602 17:36:42.925133  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8c4p","generateName":"kube-proxy-","namespace":"kube-system","uid":"f1878b35-b1dd-4c80-b1c8-6848ceeac02c","resourceVersion":"475","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"controller-revision-hash":"549f7469d9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdf29778-5a78-41df-8413-a6f3417a1d56","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdf29778-5a78-41df-8413-a6f3417a1d56\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5544 chars]
	I0602 17:36:43.082925  374798 request.go:533] Waited for 157.393761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:43.082990  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:43.082996  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:43.083004  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:43.083011  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:43.085682  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:43.085714  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:43.085724  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:43 GMT
	I0602 17:36:43.085733  374798 round_trippers.go:580]     Audit-Id: 03d7d418-fca5-41d5-bc3b-74accadba0d8
	I0602 17:36:43.085743  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:43.085752  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:43.085761  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:43.085773  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:43.085883  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:43.086286  374798 pod_ready.go:92] pod "kube-proxy-q8c4p" in "kube-system" namespace has status "Ready":"True"
	I0602 17:36:43.086304  374798 pod_ready.go:81] duration metric: took 163.072826ms waiting for pod "kube-proxy-q8c4p" in "kube-system" namespace to be "Ready" ...
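The "Waited for 157.393761ms due to client-side throttling" lines around here are not the API server pushing back; they come from client-go's own token-bucket rate limiter, whose defaults (QPS 5, burst 10) are easily exhausted by this burst of readiness GETs. When that matters, the limits can be raised on the rest.Config before building the clientset; the values below are illustrative only:

package health

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newFastClient loosens the default client-side rate limit that
// produces the "Waited for ... due to client-side throttling" lines.
// The chosen numbers are illustrative, not minikube's settings.
func newFastClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
	cfg.QPS = 50    // default is 5 requests/second
	cfg.Burst = 100 // default burst is 10
	return kubernetes.NewForConfig(cfg)
}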
	I0602 17:36:43.086314  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:43.282466  374798 request.go:533] Waited for 196.055078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220602173558-283122
	I0602 17:36:43.282533  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220602173558-283122
	I0602 17:36:43.282540  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:43.282549  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:43.282559  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:43.284908  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:43.284929  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:43.284936  374798 round_trippers.go:580]     Audit-Id: 4f2bde1b-7804-42d5-a0eb-4e1ad1850442
	I0602 17:36:43.284942  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:43.284948  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:43.284952  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:43.284958  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:43.284963  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:43 GMT
	I0602 17:36:43.285111  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220602173558-283122","namespace":"kube-system","uid":"b207e4b1-a64d-4aaf-bd7b-5eaec8e23004","resourceVersion":"366","creationTimestamp":"2022-06-02T17:36:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"16c53ed8f606fa43c821fa27956bef6a","kubernetes.io/config.mirror":"16c53ed8f606fa43c821fa27956bef6a","kubernetes.io/config.seen":"2022-06-02T17:36:20.948872903Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kuberne
tes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes [truncated 4770 chars]
	I0602 17:36:43.482565  374798 request.go:533] Waited for 197.034057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:43.482640  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:36:43.482647  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:43.482659  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:43.482668  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:43.485181  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:43.485209  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:43.485216  374798 round_trippers.go:580]     Audit-Id: ed18adef-a534-489a-9f90-6be29c81d2a0
	I0602 17:36:43.485222  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:43.485227  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:43.485233  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:43.485238  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:43.485243  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:43 GMT
	I0602 17:36:43.485389  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 4896 chars]
	I0602 17:36:43.485803  374798 pod_ready.go:92] pod "kube-scheduler-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:36:43.485821  374798 pod_ready.go:81] duration metric: took 399.498567ms waiting for pod "kube-scheduler-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:36:43.485835  374798 pod_ready.go:38] duration metric: took 1.600244091s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
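The summary above covers the label selectors listed when the extra wait began (k8s-app=kube-dns, component=etcd, and so on). A sketch of checking every pod behind such selectors in one pass (hypothetical helper, same clientset as above):

package health

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allSelectorsReady is a sketch, not the test's actual helper: it lists
// kube-system pods per label selector and requires each matched pod to
// have Ready=True.
func allSelectorsReady(cs *kubernetes.Clientset, selectors []string) (bool, error) {
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return false, err
		}
		for _, pod := range pods.Items {
			ready := false
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
	}
	return true, nil
}

With the selectors from the log, that would be allSelectorsReady(cs, []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver", "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}).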
	I0602 17:36:43.485868  374798 api_server.go:51] waiting for apiserver process to appear ...
	I0602 17:36:43.485921  374798 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 17:36:43.495835  374798 command_runner.go:130] > 1706
	I0602 17:36:43.496693  374798 api_server.go:71] duration metric: took 9.714421861s to wait for apiserver process to appear ...
	I0602 17:36:43.496726  374798 api_server.go:87] waiting for apiserver healthz status ...
	I0602 17:36:43.496739  374798 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0602 17:36:43.501406  374798 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
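The healthz probe is a raw GET against the /healthz path, and a healthy apiserver answers with the literal body "ok" seen above. With a clientset in hand, the same probe can be issued through its REST client (a sketch, not minikube's implementation):

package health

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// apiserverHealthz is a sketch of the healthz check above: it performs
// a raw GET on /healthz and expects the literal body "ok".
func apiserverHealthz(cs *kubernetes.Clientset) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("healthz returned %q", body)
	}
	return nil
}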
	I0602 17:36:43.501469  374798 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0602 17:36:43.501477  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:43.501490  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:43.501500  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:43.502296  374798 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0602 17:36:43.502317  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:43.502324  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:43.502330  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:43.502336  374798 round_trippers.go:580]     Content-Length: 263
	I0602 17:36:43.502344  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:43 GMT
	I0602 17:36:43.502353  374798 round_trippers.go:580]     Audit-Id: 419a73b5-8362-4424-b5c3-b4d710067b29
	I0602 17:36:43.502363  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:43.502378  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:43.502399  374798 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "23",
	  "gitVersion": "v1.23.6",
	  "gitCommit": "ad3338546da947756e8a88aa6822e9c11e7eac22",
	  "gitTreeState": "clean",
	  "buildDate": "2022-04-14T08:43:11Z",
	  "goVersion": "go1.17.9",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0602 17:36:43.502499  374798 api_server.go:140] control plane version: v1.23.6
	I0602 17:36:43.502515  374798 api_server.go:130] duration metric: took 5.782895ms to wait for apiserver health ...
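The /version request above returns the version.Info JSON just printed. client-go wraps that endpoint behind the discovery client, so the same data is available without hand-rolling the request (sketch):

package health

import "k8s.io/client-go/kubernetes"

// controlPlaneVersion is a sketch: it fetches /version via the
// discovery client and returns gitVersion ("v1.23.6" in this run).
func controlPlaneVersion(cs *kubernetes.Clientset) (string, error) {
	info, err := cs.Discovery().ServerVersion()
	if err != nil {
		return "", err
	}
	return info.GitVersion, nil
}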
	I0602 17:36:43.502524  374798 system_pods.go:43] waiting for kube-system pods to appear ...
	I0602 17:36:43.682929  374798 request.go:533] Waited for 180.322156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0602 17:36:43.683002  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0602 17:36:43.683007  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:43.683016  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:43.683024  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:43.686518  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:36:43.686544  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:43.686552  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:43.686558  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:43.686564  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:43.686572  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:43 GMT
	I0602 17:36:43.686581  374798 round_trippers.go:580]     Audit-Id: 806d015f-7669-4f06-b512-26ed9d66431a
	I0602 17:36:43.686590  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:43.687066  374798 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"504","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:a
rgs":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 55670 chars]
	I0602 17:36:43.689544  374798 system_pods.go:59] 8 kube-system pods found
	I0602 17:36:43.689576  374798 system_pods.go:61] "coredns-64897985d-l5jxv" [d796da5e-d4e3-4761-84e2-c742ea94211a] Running
	I0602 17:36:43.689585  374798 system_pods.go:61] "etcd-multinode-20220602173558-283122" [2de3dc57-6748-4c20-bf65-3b2cbd2f8a0f] Running
	I0602 17:36:43.689594  374798 system_pods.go:61] "kindnet-d4jwl" [02c02672-9134-4bb8-abdb-c15c1f3334ac] Running
	I0602 17:36:43.689605  374798 system_pods.go:61] "kube-apiserver-multinode-20220602173558-283122" [999b4342-ad8c-46aa-a5a0-bdd14089e393] Running
	I0602 17:36:43.689610  374798 system_pods.go:61] "kube-controller-manager-multinode-20220602173558-283122" [dc0ed8b1-4d22-46e1-a708-fee470e6c6fe] Running
	I0602 17:36:43.689623  374798 system_pods.go:61] "kube-proxy-q8c4p" [f1878b35-b1dd-4c80-b1c8-6848ceeac02c] Running
	I0602 17:36:43.689634  374798 system_pods.go:61] "kube-scheduler-multinode-20220602173558-283122" [b207e4b1-a64d-4aaf-bd7b-5eaec8e23004] Running
	I0602 17:36:43.689645  374798 system_pods.go:61] "storage-provisioner" [8a59fe44-28d0-431f-9a48-d4f0705d7d5a] Running
	I0602 17:36:43.689656  374798 system_pods.go:74] duration metric: took 187.120137ms to wait for pod list to return data ...
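The eight-pod inventory above comes from a single PodList request against the kube-system namespace. A sketch that reproduces the "Running" bookkeeping from such a list (hypothetical helper):

package health

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// runningSystemPods is a sketch: list kube-system pods, as the PodList
// request above does, and return the names of those in phase Running.
func runningSystemPods(cs *kubernetes.Clientset) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var running []string
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running = append(running, p.Name)
		}
	}
	return running, nil
}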
	I0602 17:36:43.689670  374798 default_sa.go:34] waiting for default service account to be created ...
	I0602 17:36:43.883111  374798 request.go:533] Waited for 193.354977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0602 17:36:43.883190  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0602 17:36:43.883195  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:43.883203  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:43.883210  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:43.885786  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:43.885817  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:43.885829  374798 round_trippers.go:580]     Content-Length: 304
	I0602 17:36:43.885838  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:43 GMT
	I0602 17:36:43.885847  374798 round_trippers.go:580]     Audit-Id: 868485cc-c800-4162-a124-56b1f73a5702
	I0602 17:36:43.885861  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:43.885875  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:43.885885  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:43.885898  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:43.885930  374798 request.go:1073] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"756fbbad-4c5e-4dbf-b9ed-2ce106f7008e","resourceVersion":"400","creationTimestamp":"2022-06-02T17:36:32Z"},"secrets":[{"name":"default-token-jdglt"}]}]}
	I0602 17:36:43.886187  374798 default_sa.go:45] found service account: "default"
	I0602 17:36:43.886206  374798 default_sa.go:55] duration metric: took 196.526159ms for default service account to be created ...
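The default service account is created asynchronously by the controller manager once the namespace exists, which is why the test polls for it rather than asserting it immediately. A sketch of that wait (hypothetical helper; assumes the clientset from earlier):

package health

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitDefaultSA is a sketch: poll until the "default" ServiceAccount
// exists in the default namespace; NotFound just means "keep waiting".
func waitDefaultSA(cs *kubernetes.Clientset, timeout time.Duration) error {
	return wait.PollImmediate(200*time.Millisecond, timeout, func() (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil
		}
		return err == nil, err
	})
}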
	I0602 17:36:43.886219  374798 system_pods.go:116] waiting for k8s-apps to be running ...
	I0602 17:36:44.082682  374798 request.go:533] Waited for 196.347313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0602 17:36:44.082750  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0602 17:36:44.082759  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:44.082772  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:44.082792  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:44.086440  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:36:44.086467  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:44.086475  374798 round_trippers.go:580]     Audit-Id: 820a461e-8614-4528-afd2-8a5a29ee3a6f
	I0602 17:36:44.086480  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:44.086486  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:44.086491  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:44.086496  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:44.086501  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:44 GMT
	I0602 17:36:44.086986  374798 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"509"},"items":[{"metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"504","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:a
rgs":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 55670 chars]
	I0602 17:36:44.089539  374798 system_pods.go:86] 8 kube-system pods found
	I0602 17:36:44.089573  374798 system_pods.go:89] "coredns-64897985d-l5jxv" [d796da5e-d4e3-4761-84e2-c742ea94211a] Running
	I0602 17:36:44.089582  374798 system_pods.go:89] "etcd-multinode-20220602173558-283122" [2de3dc57-6748-4c20-bf65-3b2cbd2f8a0f] Running
	I0602 17:36:44.089589  374798 system_pods.go:89] "kindnet-d4jwl" [02c02672-9134-4bb8-abdb-c15c1f3334ac] Running
	I0602 17:36:44.089595  374798 system_pods.go:89] "kube-apiserver-multinode-20220602173558-283122" [999b4342-ad8c-46aa-a5a0-bdd14089e393] Running
	I0602 17:36:44.089600  374798 system_pods.go:89] "kube-controller-manager-multinode-20220602173558-283122" [dc0ed8b1-4d22-46e1-a708-fee470e6c6fe] Running
	I0602 17:36:44.089611  374798 system_pods.go:89] "kube-proxy-q8c4p" [f1878b35-b1dd-4c80-b1c8-6848ceeac02c] Running
	I0602 17:36:44.089623  374798 system_pods.go:89] "kube-scheduler-multinode-20220602173558-283122" [b207e4b1-a64d-4aaf-bd7b-5eaec8e23004] Running
	I0602 17:36:44.089632  374798 system_pods.go:89] "storage-provisioner" [8a59fe44-28d0-431f-9a48-d4f0705d7d5a] Running
	I0602 17:36:44.089642  374798 system_pods.go:126] duration metric: took 203.41266ms to wait for k8s-apps to be running ...
	I0602 17:36:44.089656  374798 system_svc.go:44] waiting for kubelet service to be running ....
	I0602 17:36:44.089712  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 17:36:44.099440  374798 system_svc.go:56] duration metric: took 9.774081ms WaitForService to wait for kubelet.
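[editor's note] The kubelet check above succeeds or fails purely on systemctl's exit status; no output is parsed. A sketch of the same probe from Go, run locally here rather than over SSH for brevity:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // isActive reports whether a systemd unit is active, mirroring the
    // `systemctl is-active --quiet` check in the log: exit status 0 means
    // active, anything else means inactive/failed/unknown.
    func isActive(unit string) bool {
        return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", isActive("kubelet"))
    }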
	I0602 17:36:44.099473  374798 kubeadm.go:572] duration metric: took 10.317204042s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0602 17:36:44.099501  374798 node_conditions.go:102] verifying NodePressure condition ...
	I0602 17:36:44.282943  374798 request.go:533] Waited for 183.345954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0602 17:36:44.283014  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0602 17:36:44.283022  374798 round_trippers.go:469] Request Headers:
	I0602 17:36:44.283032  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:36:44.283039  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:36:44.285457  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:36:44.285478  374798 round_trippers.go:577] Response Headers:
	I0602 17:36:44.285485  374798 round_trippers.go:580]     Audit-Id: 8639181d-4b44-434d-9cbd-b521c4a33faa
	I0602 17:36:44.285490  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:36:44.285496  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:36:44.285502  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:36:44.285511  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:36:44.285519  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:36:44 GMT
	I0602 17:36:44.285616  374798 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"510"},"items":[{"metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"486","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"
0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma [truncated 4949 chars]
	I0602 17:36:44.286014  374798 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0602 17:36:44.286035  374798 node_conditions.go:123] node cpu capacity is 8
	I0602 17:36:44.286057  374798 node_conditions.go:105] duration metric: took 186.549549ms to run NodePressure ...
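[editor's note] Capacity strings such as "304695084Ki" in the NodeList response are Kubernetes resource quantities; they parse with apimachinery's resource package. A sketch, assuming k8s.io/apimachinery is available on the module path:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // "Ki" is a binary suffix: 304695084Ki = 304695084 * 1024 bytes.
        q := resource.MustParse("304695084Ki")
        fmt.Printf("ephemeral storage: %d bytes (~%d GiB)\n", q.Value(), q.Value()>>30)
    }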
	I0602 17:36:44.286072  374798 start.go:213] waiting for startup goroutines ...
	I0602 17:36:44.288611  374798 out.go:177] 
	I0602 17:36:44.290446  374798 config.go:178] Loaded profile config "multinode-20220602173558-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:36:44.290537  374798 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/config.json ...
	I0602 17:36:44.292480  374798 out.go:177] * Starting worker node multinode-20220602173558-283122-m02 in cluster multinode-20220602173558-283122
	I0602 17:36:44.294454  374798 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 17:36:44.295822  374798 out.go:177] * Pulling base image ...
	I0602 17:36:44.297124  374798 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:36:44.297147  374798 cache.go:57] Caching tarball of preloaded images
	I0602 17:36:44.297214  374798 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 17:36:44.297279  374798 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 17:36:44.297302  374798 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 17:36:44.297389  374798 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/config.json ...
	I0602 17:36:44.341787  374798 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 17:36:44.341821  374798 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 17:36:44.341836  374798 cache.go:206] Successfully downloaded all kic artifacts
	I0602 17:36:44.341872  374798 start.go:352] acquiring machines lock for multinode-20220602173558-283122-m02: {Name:mke593d81dca3b8fcdc7cb8fbeba179c36b6a97d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 17:36:44.342010  374798 start.go:356] acquired machines lock for "multinode-20220602173558-283122-m02" in 117.787µs
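[editor's note] The machines lock above is acquired with a 500ms retry delay and a 10m timeout, so parallel profile operations serialize machine creation. minikube uses a named OS mutex for this; the sketch below only approximates the retry loop with an O_EXCL lock file and is an assumption, not the real implementation:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire retries creating an exclusive lock file until it wins or the
    // timeout elapses; removing the file releases the lock.
    func acquire(path string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return nil // lock held
            }
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring machines lock")
            }
            time.Sleep(delay)
        }
    }

    func main() {
        fmt.Println(acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute))
    }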
	I0602 17:36:44.342036  374798 start.go:91] Provisioning new machine with config: &{Name:multinode-20220602173558-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220602173558-283122 Namespac
e:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name:m02 IP: Port:0 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0602 17:36:44.342114  374798 start.go:131] createHost starting for "m02" (driver="docker")
	I0602 17:36:44.344583  374798 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0602 17:36:44.344676  374798 start.go:165] libmachine.API.Create for "multinode-20220602173558-283122" (driver="docker")
	I0602 17:36:44.344705  374798 client.go:168] LocalClient.Create starting
	I0602 17:36:44.344774  374798 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem
	I0602 17:36:44.344801  374798 main.go:134] libmachine: Decoding PEM data...
	I0602 17:36:44.344819  374798 main.go:134] libmachine: Parsing certificate...
	I0602 17:36:44.344879  374798 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem
	I0602 17:36:44.344902  374798 main.go:134] libmachine: Decoding PEM data...
	I0602 17:36:44.344915  374798 main.go:134] libmachine: Parsing certificate...
	I0602 17:36:44.345145  374798 cli_runner.go:164] Run: docker network inspect multinode-20220602173558-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 17:36:44.375349  374798 network_create.go:76] Found existing network {name:multinode-20220602173558-283122 subnet:0xc000a9c000 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0602 17:36:44.375419  374798 kic.go:106] calculated static IP "192.168.49.3" for the "multinode-20220602173558-283122-m02" container
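[editor's note] The static IP is derived from the existing network rather than asked of Docker: the gateway is 192.168.49.1, the control plane holds .2, and the second node lands on .3. A sketch of that offset arithmetic (the exact scheme is inferred from these addresses, not confirmed):

    package main

    import (
        "fmt"
        "net"
    )

    // nthIP returns gateway + n in the last octet; it assumes a /24-sized
    // pool and does not guard against overflowing past .255.
    func nthIP(gateway string, n int) net.IP {
        ip := net.ParseIP(gateway).To4()
        out := make(net.IP, len(ip))
        copy(out, ip)
        out[3] += byte(n) // gateway .1 + 2 => .3 for the second node
        return out
    }

    func main() {
        fmt.Println(nthIP("192.168.49.1", 2)) // 192.168.49.3
    }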
	I0602 17:36:44.375473  374798 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 17:36:44.406475  374798 cli_runner.go:164] Run: docker volume create multinode-20220602173558-283122-m02 --label name.minikube.sigs.k8s.io=multinode-20220602173558-283122-m02 --label created_by.minikube.sigs.k8s.io=true
	I0602 17:36:44.440789  374798 oci.go:103] Successfully created a docker volume multinode-20220602173558-283122-m02
	I0602 17:36:44.440871  374798 cli_runner.go:164] Run: docker run --rm --name multinode-20220602173558-283122-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220602173558-283122-m02 --entrypoint /usr/bin/test -v multinode-20220602173558-283122-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 17:36:44.986137  374798 oci.go:107] Successfully prepared a docker volume multinode-20220602173558-283122-m02
	I0602 17:36:44.986188  374798 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 17:36:44.986211  374798 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 17:36:44.986271  374798 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220602173558-283122-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 17:36:51.696213  374798 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-20220602173558-283122-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (6.709881411s)
	I0602 17:36:51.696251  374798 kic.go:188] duration metric: took 6.710035 seconds to extract preloaded images to volume
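[editor's note] The extraction step spins up a throwaway container that mounts the lz4 preload tarball read-only alongside the node's /var volume, then untars into it, so the new node starts with all images already in place. A sketch of the same invocation from Go; the tarball path and volume name below are placeholders, not the real CI paths:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // Mirrors the `docker run --rm --entrypoint /usr/bin/tar ...` in the log.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // placeholder path
            "-v", "mynode-volume:/extractDir", // placeholder volume
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if err := cmd.Run(); err != nil {
            fmt.Println("extract failed:", err)
            return
        }
        fmt.Printf("extracted preload in %v\n", time.Since(start))
    }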
	W0602 17:36:51.696370  374798 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0602 17:36:51.696485  374798 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 17:36:51.801401  374798 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-20220602173558-283122-m02 --name multinode-20220602173558-283122-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-20220602173558-283122-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-20220602173558-283122-m02 --network multinode-20220602173558-283122 --ip 192.168.49.3 --volume multinode-20220602173558-283122-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
	I0602 17:36:52.211385  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122-m02 --format={{.State.Running}}
	I0602 17:36:52.247826  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122-m02 --format={{.State.Status}}
	I0602 17:36:52.282505  374798 cli_runner.go:164] Run: docker exec multinode-20220602173558-283122-m02 stat /var/lib/dpkg/alternatives/iptables
	I0602 17:36:52.345188  374798 oci.go:247] the created container "multinode-20220602173558-283122-m02" has a running status.
	I0602 17:36:52.345242  374798 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa...
	I0602 17:36:52.597364  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0602 17:36:52.597413  374798 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0602 17:36:52.690257  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122-m02 --format={{.State.Status}}
	I0602 17:36:52.725456  374798 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0602 17:36:52.725482  374798 kic_runner.go:114] Args: [docker exec --privileged multinode-20220602173558-283122-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0602 17:36:52.811992  374798 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122-m02 --format={{.State.Status}}
	I0602 17:36:52.846024  374798 machine.go:88] provisioning docker machine ...
	I0602 17:36:52.846070  374798 ubuntu.go:169] provisioning hostname "multinode-20220602173558-283122-m02"
	I0602 17:36:52.846134  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:52.880202  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:52.880409  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49522 <nil> <nil>}
	I0602 17:36:52.880435  374798 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220602173558-283122-m02 && echo "multinode-20220602173558-283122-m02" | sudo tee /etc/hostname
	I0602 17:36:53.006413  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220602173558-283122-m02
	
	I0602 17:36:53.006499  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:53.039516  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:53.039740  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49522 <nil> <nil>}
	I0602 17:36:53.039776  374798 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220602173558-283122-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220602173558-283122-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220602173558-283122-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 17:36:53.153468  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0602 17:36:53.153499  374798 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem
ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 17:36:53.153520  374798 ubuntu.go:177] setting up certificates
	I0602 17:36:53.153530  374798 provision.go:83] configureAuth start
	I0602 17:36:53.153578  374798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602173558-283122-m02
	I0602 17:36:53.186533  374798 provision.go:138] copyHostCerts
	I0602 17:36:53.186586  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 17:36:53.186621  374798 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 17:36:53.186635  374798 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 17:36:53.186713  374798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 17:36:53.186791  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 17:36:53.186812  374798 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 17:36:53.186821  374798 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 17:36:53.186848  374798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 17:36:53.186897  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 17:36:53.186921  374798 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 17:36:53.186930  374798 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 17:36:53.186957  374798 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1679 bytes)
	I0602 17:36:53.187006  374798 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.multinode-20220602173558-283122-m02 san=[192.168.49.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220602173558-283122-m02]
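[editor's note] The server cert above is issued from the existing minikube CA with the node IP and hostnames from the `san=[...]` list as subject alternative names. A minimal sketch with crypto/x509; the 2048-bit keys and 24h validity are assumptions for the example, not minikube's parameters, and intermediate errors are elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed CA standing in for minikubeCA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs from the log line.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "multinode-20220602173558-283122-m02"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses:  []net.IP{net.ParseIP("192.168.49.3"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "multinode-20220602173558-283122-m02"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Println("server cert DER bytes:", len(der), "err:", err)
    }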
	I0602 17:36:53.299400  374798 provision.go:172] copyRemoteCerts
	I0602 17:36:53.299469  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 17:36:53.299508  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:53.331449  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49522 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa Username:docker}
	I0602 17:36:53.420721  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0602 17:36:53.420803  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 17:36:53.439328  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0602 17:36:53.439402  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0602 17:36:53.458058  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0602 17:36:53.458122  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 17:36:53.475939  374798 provision.go:86] duration metric: configureAuth took 322.394179ms
	I0602 17:36:53.475975  374798 ubuntu.go:193] setting minikube options for container-runtime
	I0602 17:36:53.476199  374798 config.go:178] Loaded profile config "multinode-20220602173558-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:36:53.476263  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:53.507994  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:53.508159  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49522 <nil> <nil>}
	I0602 17:36:53.508177  374798 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 17:36:53.621424  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 17:36:53.621455  374798 ubuntu.go:71] root file system type: overlay
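[editor's note] The root-filesystem probe is just `df --output=fstype / | tail -n 1`; the answer "overlay" tells the provisioner it is running inside an overlayfs-backed container. The same probe from Go:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // rootFSType runs `df --output=fstype /` and returns the last field,
    // equivalent to the `| tail -n 1` in the log.
    func rootFSType() (string, error) {
        out, err := exec.Command("df", "--output=fstype", "/").Output()
        if err != nil {
            return "", err
        }
        fields := strings.Fields(strings.TrimSpace(string(out))) // ["Type", "overlay"]
        return fields[len(fields)-1], nil
    }

    func main() {
        fs, err := rootFSType()
        fmt.Println(fs, err)
    }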
	I0602 17:36:53.621617  374798 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 17:36:53.621674  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:53.654568  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:53.654763  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49522 <nil> <nil>}
	I0602 17:36:53.654863  374798 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 17:36:53.778683  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 17:36:53.778777  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:53.812021  374798 main.go:134] libmachine: Using SSH client type: native
	I0602 17:36:53.812202  374798 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49522 <nil> <nil>}
	I0602 17:36:53.812231  374798 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 17:36:54.467417  374798 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 17:36:53.775987983 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0602 17:36:54.467453  374798 machine.go:91] provisioned docker machine in 1.621404077s
	I0602 17:36:54.467464  374798 client.go:171] LocalClient.Create took 10.122750699s
	I0602 17:36:54.467483  374798 start.go:173] duration metric: libmachine.API.Create for "multinode-20220602173558-283122" took 10.122803983s
	I0602 17:36:54.467492  374798 start.go:306] post-start starting for "multinode-20220602173558-283122-m02" (driver="docker")
	I0602 17:36:54.467500  374798 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 17:36:54.467568  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 17:36:54.467619  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:54.498860  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49522 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa Username:docker}
	I0602 17:36:54.584793  374798 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 17:36:54.587757  374798 command_runner.go:130] > NAME="Ubuntu"
	I0602 17:36:54.587790  374798 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0602 17:36:54.587798  374798 command_runner.go:130] > ID=ubuntu
	I0602 17:36:54.587806  374798 command_runner.go:130] > ID_LIKE=debian
	I0602 17:36:54.587819  374798 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0602 17:36:54.587827  374798 command_runner.go:130] > VERSION_ID="20.04"
	I0602 17:36:54.587841  374798 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0602 17:36:54.587850  374798 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0602 17:36:54.587858  374798 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0602 17:36:54.587871  374798 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0602 17:36:54.587878  374798 command_runner.go:130] > VERSION_CODENAME=focal
	I0602 17:36:54.587883  374798 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0602 17:36:54.587952  374798 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 17:36:54.587971  374798 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 17:36:54.587981  374798 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 17:36:54.587987  374798 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 17:36:54.588002  374798 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 17:36:54.588064  374798 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 17:36:54.588135  374798 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem -> 2831222.pem in /etc/ssl/certs
	I0602 17:36:54.588148  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem -> /etc/ssl/certs/2831222.pem
	I0602 17:36:54.588240  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 17:36:54.595360  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem --> /etc/ssl/certs/2831222.pem (1708 bytes)
	I0602 17:36:54.612951  374798 start.go:309] post-start completed in 145.441924ms
	I0602 17:36:54.613330  374798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602173558-283122-m02
	I0602 17:36:54.644946  374798 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/config.json ...
	I0602 17:36:54.645211  374798 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 17:36:54.645257  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:54.676454  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49522 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa Username:docker}
	I0602 17:36:54.757644  374798 command_runner.go:130] > 22%
	I0602 17:36:54.757725  374798 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 17:36:54.761748  374798 command_runner.go:130] > 227G
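[editor's note] The two df probes above ("22%" of /var used, "227G" available) can also be answered without shelling out, via Statfs. A Linux-only sketch; the percentage below follows df's used/(used+avail) formula but not its exact rounding:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        var st syscall.Statfs_t
        if err := syscall.Statfs("/var", &st); err != nil {
            panic(err)
        }
        bsize := uint64(st.Bsize)
        used := st.Blocks - st.Bfree                  // blocks in use
        pct := 100 * used / (used + st.Bavail)        // df-style Use%
        fmt.Printf("%d%% used, %dG available\n", pct, st.Bavail*bsize>>30)
    }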
	I0602 17:36:54.761788  374798 start.go:134] duration metric: createHost completed in 10.419665986s
	I0602 17:36:54.761797  374798 start.go:81] releasing machines lock for "multinode-20220602173558-283122-m02", held for 10.419773753s
	I0602 17:36:54.761891  374798 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602173558-283122-m02
	I0602 17:36:54.797959  374798 out.go:177] * Found network options:
	I0602 17:36:54.799774  374798 out.go:177]   - NO_PROXY=192.168.49.2
	W0602 17:36:54.801498  374798 proxy.go:118] fail to check proxy env: Error ip not in block
	W0602 17:36:54.801553  374798 proxy.go:118] fail to check proxy env: Error ip not in block
	I0602 17:36:54.801647  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 17:36:54.801701  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:54.801749  374798 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 17:36:54.801820  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:36:54.835138  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49522 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa Username:docker}
	I0602 17:36:54.835553  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49522 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa Username:docker}
	I0602 17:36:54.923443  374798 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 17:36:54.938156  374798 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0602 17:36:54.938185  374798 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0602 17:36:54.938191  374798 command_runner.go:130] > <H1>302 Moved</H1>
	I0602 17:36:54.938195  374798 command_runner.go:130] > The document has moved
	I0602 17:36:54.938203  374798 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0602 17:36:54.938206  374798 command_runner.go:130] > </BODY></HTML>
	I0602 17:36:54.939795  374798 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0602 17:36:54.939813  374798 command_runner.go:130] > [Unit]
	I0602 17:36:54.939821  374798 command_runner.go:130] > Description=Docker Application Container Engine
	I0602 17:36:54.939826  374798 command_runner.go:130] > Documentation=https://docs.docker.com
	I0602 17:36:54.939831  374798 command_runner.go:130] > BindsTo=containerd.service
	I0602 17:36:54.939836  374798 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0602 17:36:54.939840  374798 command_runner.go:130] > Wants=network-online.target
	I0602 17:36:54.939846  374798 command_runner.go:130] > Requires=docker.socket
	I0602 17:36:54.939850  374798 command_runner.go:130] > StartLimitBurst=3
	I0602 17:36:54.939854  374798 command_runner.go:130] > StartLimitIntervalSec=60
	I0602 17:36:54.939858  374798 command_runner.go:130] > [Service]
	I0602 17:36:54.939861  374798 command_runner.go:130] > Type=notify
	I0602 17:36:54.939865  374798 command_runner.go:130] > Restart=on-failure
	I0602 17:36:54.939873  374798 command_runner.go:130] > Environment=NO_PROXY=192.168.49.2
	I0602 17:36:54.939880  374798 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0602 17:36:54.939888  374798 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0602 17:36:54.939900  374798 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0602 17:36:54.939906  374798 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0602 17:36:54.939912  374798 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0602 17:36:54.939922  374798 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0602 17:36:54.939932  374798 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0602 17:36:54.939943  374798 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0602 17:36:54.939952  374798 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0602 17:36:54.939958  374798 command_runner.go:130] > ExecStart=
	I0602 17:36:54.939974  374798 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0602 17:36:54.939984  374798 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0602 17:36:54.939994  374798 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0602 17:36:54.940003  374798 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0602 17:36:54.940011  374798 command_runner.go:130] > LimitNOFILE=infinity
	I0602 17:36:54.940015  374798 command_runner.go:130] > LimitNPROC=infinity
	I0602 17:36:54.940020  374798 command_runner.go:130] > LimitCORE=infinity
	I0602 17:36:54.940025  374798 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0602 17:36:54.940034  374798 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0602 17:36:54.940043  374798 command_runner.go:130] > TasksMax=infinity
	I0602 17:36:54.940047  374798 command_runner.go:130] > TimeoutStartSec=0
	I0602 17:36:54.940053  374798 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0602 17:36:54.940060  374798 command_runner.go:130] > Delegate=yes
	I0602 17:36:54.940065  374798 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0602 17:36:54.940072  374798 command_runner.go:130] > KillMode=process
	I0602 17:36:54.940078  374798 command_runner.go:130] > [Install]
	I0602 17:36:54.940086  374798 command_runner.go:130] > WantedBy=multi-user.target
	I0602 17:36:54.940109  374798 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 17:36:54.940154  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 17:36:54.950787  374798 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 17:36:54.962999  374798 command_runner.go:130] > runtime-endpoint: unix:///var/run/dockershim.sock
	I0602 17:36:54.963025  374798 command_runner.go:130] > image-endpoint: unix:///var/run/dockershim.sock
	I0602 17:36:54.963806  374798 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 17:36:55.043848  374798 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 17:36:55.124923  374798 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 17:36:55.134535  374798 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0602 17:36:55.134568  374798 command_runner.go:130] > [Unit]
	I0602 17:36:55.134576  374798 command_runner.go:130] > Description=Docker Application Container Engine
	I0602 17:36:55.134582  374798 command_runner.go:130] > Documentation=https://docs.docker.com
	I0602 17:36:55.134586  374798 command_runner.go:130] > BindsTo=containerd.service
	I0602 17:36:55.134591  374798 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0602 17:36:55.134596  374798 command_runner.go:130] > Wants=network-online.target
	I0602 17:36:55.134601  374798 command_runner.go:130] > Requires=docker.socket
	I0602 17:36:55.134606  374798 command_runner.go:130] > StartLimitBurst=3
	I0602 17:36:55.134610  374798 command_runner.go:130] > StartLimitIntervalSec=60
	I0602 17:36:55.134614  374798 command_runner.go:130] > [Service]
	I0602 17:36:55.134617  374798 command_runner.go:130] > Type=notify
	I0602 17:36:55.134622  374798 command_runner.go:130] > Restart=on-failure
	I0602 17:36:55.134626  374798 command_runner.go:130] > Environment=NO_PROXY=192.168.49.2
	I0602 17:36:55.134638  374798 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0602 17:36:55.134653  374798 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0602 17:36:55.134660  374798 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0602 17:36:55.134671  374798 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0602 17:36:55.134681  374798 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0602 17:36:55.134692  374798 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0602 17:36:55.134704  374798 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0602 17:36:55.134717  374798 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0602 17:36:55.134724  374798 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0602 17:36:55.134732  374798 command_runner.go:130] > ExecStart=
	I0602 17:36:55.134746  374798 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0602 17:36:55.134758  374798 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0602 17:36:55.134765  374798 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0602 17:36:55.134776  374798 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0602 17:36:55.134784  374798 command_runner.go:130] > LimitNOFILE=infinity
	I0602 17:36:55.134789  374798 command_runner.go:130] > LimitNPROC=infinity
	I0602 17:36:55.134801  374798 command_runner.go:130] > LimitCORE=infinity
	I0602 17:36:55.134810  374798 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0602 17:36:55.134816  374798 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0602 17:36:55.134824  374798 command_runner.go:130] > TasksMax=infinity
	I0602 17:36:55.134828  374798 command_runner.go:130] > TimeoutStartSec=0
	I0602 17:36:55.134838  374798 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0602 17:36:55.134846  374798 command_runner.go:130] > Delegate=yes
	I0602 17:36:55.134852  374798 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0602 17:36:55.134863  374798 command_runner.go:130] > KillMode=process
	I0602 17:36:55.134880  374798 command_runner.go:130] > [Install]
	I0602 17:36:55.134889  374798 command_runner.go:130] > WantedBy=multi-user.target
	I0602 17:36:55.134949  374798 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 17:36:55.214195  374798 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 17:36:55.223884  374798 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 17:36:55.263439  374798 command_runner.go:130] > 20.10.16
	I0602 17:36:55.263530  374798 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 17:36:55.301060  374798 command_runner.go:130] > 20.10.16
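[editor's note] Both version probes use Docker's Go-template --format flag, so stdout is the bare version string and needs no parsing. The equivalent from Go:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Mirrors `docker version --format {{.Server.Version}}` from the log.
        out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
        if err != nil {
            fmt.Println("docker not reachable:", err)
            return
        }
        fmt.Println("server version:", strings.TrimSpace(string(out))) // e.g. 20.10.16
    }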
	I0602 17:36:55.306653  374798 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 17:36:55.308446  374798 out.go:177]   - env NO_PROXY=192.168.49.2
	I0602 17:36:55.310059  374798 cli_runner.go:164] Run: docker network inspect multinode-20220602173558-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 17:36:55.342062  374798 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0602 17:36:55.345425  374798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 17:36:55.355035  374798 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122 for IP: 192.168.49.3
	I0602 17:36:55.355156  374798 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 17:36:55.355211  374798 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 17:36:55.355228  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0602 17:36:55.355252  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0602 17:36:55.355273  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0602 17:36:55.355292  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0602 17:36:55.355356  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem (1338 bytes)
	W0602 17:36:55.355396  374798 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122_empty.pem, impossibly tiny 0 bytes
	I0602 17:36:55.355415  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 17:36:55.355453  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 17:36:55.355491  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 17:36:55.355525  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1679 bytes)
	I0602 17:36:55.355581  374798 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem (1708 bytes)
	I0602 17:36:55.355620  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem -> /usr/share/ca-certificates/283122.pem
	I0602 17:36:55.355638  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem -> /usr/share/ca-certificates/2831222.pem
	I0602 17:36:55.355656  374798 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:55.355993  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 17:36:55.374151  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0602 17:36:55.392059  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 17:36:55.410425  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0602 17:36:55.428111  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem --> /usr/share/ca-certificates/283122.pem (1338 bytes)
	I0602 17:36:55.446381  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem --> /usr/share/ca-certificates/2831222.pem (1708 bytes)
	I0602 17:36:55.464492  374798 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 17:36:55.483111  374798 ssh_runner.go:195] Run: openssl version
	I0602 17:36:55.487989  374798 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0602 17:36:55.488129  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2831222.pem && ln -fs /usr/share/ca-certificates/2831222.pem /etc/ssl/certs/2831222.pem"
	I0602 17:36:55.495955  374798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2831222.pem
	I0602 17:36:55.499293  374798 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun  2 17:19 /usr/share/ca-certificates/2831222.pem
	I0602 17:36:55.499326  374798 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:19 /usr/share/ca-certificates/2831222.pem
	I0602 17:36:55.499362  374798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2831222.pem
	I0602 17:36:55.504251  374798 command_runner.go:130] > 3ec20f2e
	I0602 17:36:55.504461  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2831222.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 17:36:55.512147  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 17:36:55.519767  374798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:55.522783  374798 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:55.522908  374798 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:55.522981  374798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 17:36:55.527832  374798 command_runner.go:130] > b5213941
	I0602 17:36:55.527927  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 17:36:55.535436  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/283122.pem && ln -fs /usr/share/ca-certificates/283122.pem /etc/ssl/certs/283122.pem"
	I0602 17:36:55.542708  374798 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/283122.pem
	I0602 17:36:55.545784  374798 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun  2 17:19 /usr/share/ca-certificates/283122.pem
	I0602 17:36:55.545825  374798 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:19 /usr/share/ca-certificates/283122.pem
	I0602 17:36:55.545867  374798 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/283122.pem
	I0602 17:36:55.550670  374798 command_runner.go:130] > 51391683
	I0602 17:36:55.550748  374798 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/283122.pem /etc/ssl/certs/51391683.0"
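	The hash-and-symlink dance above is how OpenSSL-style trust stores are populated: each certificate is exposed as /etc/ssl/certs/<subject-hash>.0. A minimal Go sketch of the same steps, shelling out to openssl exactly as the log does; the path below is illustrative and the symlink requires root:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert derives the OpenSSL subject hash of an installed PEM and
	// force-symlinks /etc/ssl/certs/<hash>.0 at it, as the ln -fs above
	// does. Needs openssl on PATH.
	func linkCert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pem, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // emulate the -f in ln -fs
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}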
	I0602 17:36:55.558622  374798 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 17:36:55.639426  374798 command_runner.go:130] > cgroupfs
	I0602 17:36:55.641649  374798 cni.go:95] Creating CNI manager for ""
	I0602 17:36:55.641679  374798 cni.go:156] 2 nodes found, recommending kindnet
	I0602 17:36:55.641701  374798 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 17:36:55.641721  374798 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.3 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220602173558-283122 NodeName:multinode-20220602173558-283122-m02 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.3 CgroupDriver:cgroupfs ClientCAFi
le:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 17:36:55.641869  374798 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "multinode-20220602173558-283122-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
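	The kubeadm config above is rendered per node; note the -m02 node name and the 192.168.49.3 node-ip versus the control plane's .2. A simplified sketch of that substitution, assuming a text/template similar in spirit to minikube's kubeadm template (the real one carries many more fields):

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative fragment of the per-node substitution; field names
	// here are invented for the sketch, not minikube's actual template
	// variables.
	const nodeRegistration = `nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	  taints: []
	`

	func main() {
		t := template.Must(template.New("nr").Parse(nodeRegistration))
		_ = t.Execute(os.Stdout, map[string]string{
			"CRISocket": "/var/run/dockershim.sock",
			"NodeName":  "multinode-20220602173558-283122-m02",
			"NodeIP":    "192.168.49.3",
		})
	}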
	I0602 17:36:55.641971  374798 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=multinode-20220602173558-283122-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:multinode-20220602173558-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0602 17:36:55.642030  374798 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 17:36:55.649308  374798 command_runner.go:130] > kubeadm
	I0602 17:36:55.649334  374798 command_runner.go:130] > kubectl
	I0602 17:36:55.649338  374798 command_runner.go:130] > kubelet
	I0602 17:36:55.649360  374798 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 17:36:55.649404  374798 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0602 17:36:55.656316  374798 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (413 bytes)
	I0602 17:36:55.669783  374798 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 17:36:55.682702  374798 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0602 17:36:55.688257  374798 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
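	The /etc/hosts rewrite above is idempotent: strip any stale line for the host, append the fresh mapping, then sudo-copy the result into place. The same filtering logic as a small, testable Go function (file writing and sudo omitted):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry drops any existing line ending in "\t<host>" and
	// appends "<ip>\t<host>", mirroring the grep -v / echo pipeline above.
	func ensureHostsEntry(hosts, ip, host string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // stale entry; replaced below
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1\tlocalhost\n", "192.168.49.2", "control-plane.minikube.internal"))
	}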
	I0602 17:36:55.698947  374798 host.go:66] Checking if "multinode-20220602173558-283122" exists ...
	I0602 17:36:55.699233  374798 config.go:178] Loaded profile config "multinode-20220602173558-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:36:55.699222  374798 start.go:282] JoinCluster: &{Name:multinode-20220602173558-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:multinode-20220602173558-283122 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:36:55.699317  374798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0602 17:36:55.699362  374798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:36:55.732570  374798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:36:55.859293  374798 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 684lf2.rh4yuqaps0jw4imj --discovery-token-ca-cert-hash sha256:63ba1911ecf093d3a1264e2d920adc95fcef1f12d9f3ed8ad760b71f9de41674 
	I0602 17:36:55.863005  374798 start.go:303] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0602 17:36:55.863063  374798 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 684lf2.rh4yuqaps0jw4imj --discovery-token-ca-cert-hash sha256:63ba1911ecf093d3a1264e2d920adc95fcef1f12d9f3ed8ad760b71f9de41674 --ignore-preflight-errors=all --cri-socket /var/run/dockershim.sock --node-name=multinode-20220602173558-283122-m02"
	I0602 17:36:55.895653  374798 command_runner.go:130] > [preflight] Running pre-flight checks
	I0602 17:36:56.073363  374798 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0602 17:36:56.073388  374798 command_runner.go:130] > KERNEL_VERSION: 5.13.0-1027-gcp
	I0602 17:36:56.073395  374798 command_runner.go:130] > DOCKER_VERSION: 20.10.16
	I0602 17:36:56.073402  374798 command_runner.go:130] > DOCKER_GRAPH_DRIVER: overlay2
	I0602 17:36:56.073410  374798 command_runner.go:130] > OS: Linux
	I0602 17:36:56.073418  374798 command_runner.go:130] > CGROUPS_CPU: enabled
	I0602 17:36:56.073426  374798 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0602 17:36:56.073479  374798 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0602 17:36:56.073521  374798 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0602 17:36:56.073535  374798 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0602 17:36:56.073540  374798 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0602 17:36:56.073547  374798 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0602 17:36:56.073552  374798 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0602 17:36:56.167564  374798 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0602 17:36:56.167605  374798 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0602 17:36:56.527278  374798 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0602 17:36:56.527318  374798 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0602 17:36:56.527331  374798 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0602 17:36:56.610859  374798 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0602 17:37:02.152383  374798 command_runner.go:130] > This node has joined the cluster:
	I0602 17:37:02.152418  374798 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0602 17:37:02.152427  374798 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0602 17:37:02.152437  374798 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0602 17:37:02.155436  374798 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.13.0-1027-gcp\n", err: exit status 1
	I0602 17:37:02.155473  374798 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0602 17:37:02.155498  374798 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 684lf2.rh4yuqaps0jw4imj --discovery-token-ca-cert-hash sha256:63ba1911ecf093d3a1264e2d920adc95fcef1f12d9f3ed8ad760b71f9de41674 --ignore-preflight-errors=all --cri-socket /var/run/dockershim.sock --node-name=multinode-20220602173558-283122-m02": (6.29241467s)
	I0602 17:37:02.155522  374798 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0602 17:37:02.536064  374798 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0602 17:37:02.536113  374798 start.go:284] JoinCluster complete in 6.836888566s
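	JoinCluster is two commands: mint a join command on the control plane, then run it on the new node with the extra flags the log shows. A minimal Go sketch of that flow, assuming kubeadm on PATH and root locally (minikube performs both steps over SSH into the respective containers):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask the control plane for a fresh join command, as at 17:36:55.
		out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
		if err != nil {
			fmt.Println("token create failed:", err)
			return
		}
		// Append the same flags the log adds before running the join.
		join := strings.TrimSpace(string(out)) +
			" --ignore-preflight-errors=all" +
			" --cri-socket /var/run/dockershim.sock" +
			" --node-name=multinode-20220602173558-283122-m02"
		b, err := exec.Command("/bin/bash", "-c", "sudo "+join).CombinedOutput()
		fmt.Printf("%s", b)
		if err != nil {
			fmt.Println("join failed:", err)
		}
	}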
	I0602 17:37:02.536128  374798 cni.go:95] Creating CNI manager for ""
	I0602 17:37:02.536135  374798 cni.go:156] 2 nodes found, recommending kindnet
	I0602 17:37:02.536212  374798 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0602 17:37:02.540818  374798 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0602 17:37:02.540847  374798 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0602 17:37:02.540857  374798 command_runner.go:130] > Device: 34h/52d	Inode: 13679887    Links: 1
	I0602 17:37:02.540867  374798 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0602 17:37:02.540874  374798 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0602 17:37:02.540883  374798 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0602 17:37:02.540891  374798 command_runner.go:130] > Change: 2022-06-01 20:34:52.693415195 +0000
	I0602 17:37:02.540897  374798 command_runner.go:130] >  Birth: -
	I0602 17:37:02.540992  374798 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0602 17:37:02.541007  374798 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0602 17:37:02.557296  374798 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0602 17:37:02.718429  374798 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0602 17:37:02.718456  374798 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0602 17:37:02.718465  374798 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0602 17:37:02.718473  374798 command_runner.go:130] > daemonset.apps/kindnet configured
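	The stat above acts as a precondition check: the kindnet manifest is only applied once the portmap CNI plugin is present and executable under /opt/cni/bin. An equivalent check in Go:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Verify the portmap plugin exists and has an execute bit set,
		// matching what the stat call above establishes.
		fi, err := os.Stat("/opt/cni/bin/portmap")
		if err != nil {
			fmt.Println("portmap missing:", err)
			return
		}
		if fi.Mode().Perm()&0o111 == 0 {
			fmt.Println("portmap is not executable")
			return
		}
		fmt.Printf("portmap ok: %d bytes, mode %v\n", fi.Size(), fi.Mode().Perm())
	}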
	I0602 17:37:02.718517  374798 start.go:208] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:0 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0602 17:37:02.721952  374798 out.go:177] * Verifying Kubernetes components...
	I0602 17:37:02.723546  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 17:37:02.746634  374798 loader.go:372] Config loaded from file:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:37:02.747051  374798 kapi.go:59] client config for multinode-20220602173558-283122: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode-20220602173558-283122/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/multinode
-20220602173558-283122/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17122e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0602 17:37:02.747389  374798 node_ready.go:35] waiting up to 6m0s for node "multinode-20220602173558-283122-m02" to be "Ready" ...
	I0602 17:37:02.747463  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:02.747474  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:02.747488  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:02.747498  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:02.750028  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:02.750061  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:02.750072  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:02.750082  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:02.750090  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:02.750098  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:02.750107  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:02 GMT
	I0602 17:37:02.750116  374798 round_trippers.go:580]     Audit-Id: 2a73fdb1-be7d-4062-8e8e-7d3820b8c0f3
	I0602 17:37:02.750242  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:03.251017  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:03.251044  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:03.251053  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:03.251060  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:03.253904  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:03.253937  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:03.253948  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:03.253958  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:03.253968  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:03.253977  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:03 GMT
	I0602 17:37:03.253992  374798 round_trippers.go:580]     Audit-Id: d9aa50f4-0531-43bf-bd96-4ccd3c9bdb92
	I0602 17:37:03.254001  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:03.254146  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:03.750796  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:03.750826  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:03.750836  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:03.750842  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:03.753326  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:03.753358  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:03.753370  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:03.753381  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:03.753387  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:03.753393  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:03.753401  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:03 GMT
	I0602 17:37:03.753409  374798 round_trippers.go:580]     Audit-Id: f1832d90-36b1-449c-9dfd-60c6aea763c3
	I0602 17:37:03.753638  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:04.251020  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:04.251043  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:04.251053  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:04.251059  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:04.254333  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:37:04.254368  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:04.254379  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:04.254388  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:04.254397  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:04.254408  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:04.254426  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:04 GMT
	I0602 17:37:04.254435  374798 round_trippers.go:580]     Audit-Id: 6477f7ac-5536-48a8-b3f5-ab3d812c1a52
	I0602 17:37:04.254543  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:04.751066  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:04.751094  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:04.751103  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:04.751110  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:04.753613  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:04.753644  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:04.753656  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:04.753663  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:04.753672  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:04.753679  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:04.753700  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:04 GMT
	I0602 17:37:04.753709  374798 round_trippers.go:580]     Audit-Id: 961a8781-98bc-4fef-bfb5-1f1ee80d0ffe
	I0602 17:37:04.753837  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:04.754178  374798 node_ready.go:58] node "multinode-20220602173558-283122-m02" has status "Ready":"False"
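	The raw round-tripper GETs above poll the node object roughly every 500ms until its NodeReady condition flips to True. The same wait expressed with client-go, as a sketch; the kubeconfig path is illustrative, and minikube's node_ready.go uses its own REST plumbing rather than this helper:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		name := "multinode-20220602173558-283122-m02"
		// Poll every 500ms, up to the 6m0s budget the log states, until
		// the NodeReady condition is True.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API error; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		fmt.Println("ready:", err == nil)
	}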
	I0602 17:37:05.251341  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:05.251369  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:05.251378  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:05.251387  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:05.254147  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:05.254177  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:05.254189  374798 round_trippers.go:580]     Audit-Id: 3067cd18-b128-4514-87b5-1ce91eaf4393
	I0602 17:37:05.254198  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:05.254203  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:05.254213  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:05.254218  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:05.254227  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:05 GMT
	I0602 17:37:05.254327  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:05.750886  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:05.750910  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:05.750919  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:05.750925  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:05.753653  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:05.753681  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:05.753697  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:05.753705  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:05.753714  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:05.753721  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:05.753730  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:05 GMT
	I0602 17:37:05.753743  374798 round_trippers.go:580]     Audit-Id: f6f8f29f-dcad-4492-9ddb-7d7e6b994403
	I0602 17:37:05.753868  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:06.251537  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:06.251567  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:06.251576  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:06.251582  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:06.254265  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:06.254296  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:06.254308  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:06.254317  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:06.254323  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:06 GMT
	I0602 17:37:06.254329  374798 round_trippers.go:580]     Audit-Id: b187ece0-eb04-45a6-8a27-dc7754e62d89
	I0602 17:37:06.254338  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:06.254345  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:06.254471  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:06.751046  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:06.751072  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:06.751081  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:06.751087  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:06.753678  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:06.753706  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:06.753721  374798 round_trippers.go:580]     Audit-Id: 1f77e5a9-98a1-4166-84ec-f77de3ae3588
	I0602 17:37:06.753730  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:06.753739  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:06.753752  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:06.753764  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:06.753773  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:06 GMT
	I0602 17:37:06.753883  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:07.251580  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:07.251608  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:07.251617  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:07.251623  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:07.253946  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:07.253969  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:07.253977  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:07.253982  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:07.253988  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:07 GMT
	I0602 17:37:07.253994  374798 round_trippers.go:580]     Audit-Id: c89f70f1-5199-4b61-a207-af61788eed44
	I0602 17:37:07.253999  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:07.254006  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:07.254123  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:07.254415  374798 node_ready.go:58] node "multinode-20220602173558-283122-m02" has status "Ready":"False"
	I0602 17:37:07.751785  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:07.751811  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:07.751821  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:07.751834  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:07.754282  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:07.754306  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:07.754314  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:07 GMT
	I0602 17:37:07.754319  374798 round_trippers.go:580]     Audit-Id: 91687ef8-4653-4d35-86ab-e2e798308e98
	I0602 17:37:07.754325  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:07.754330  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:07.754335  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:07.754340  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:07.754452  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"564","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4204 chars]
	I0602 17:37:08.251558  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:08.251589  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.251598  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.251604  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.254063  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:08.254096  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.254107  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.254115  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.254124  374798 round_trippers.go:580]     Audit-Id: 771634dd-db16-45c7-b519-2ec9ff11035c
	I0602 17:37:08.254133  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.254149  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.254158  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.254268  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"576","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4239 chars]
	I0602 17:37:08.254566  374798 node_ready.go:49] node "multinode-20220602173558-283122-m02" has status "Ready":"True"
	I0602 17:37:08.254587  374798 node_ready.go:38] duration metric: took 5.507178903s waiting for node "multinode-20220602173558-283122-m02" to be "Ready" ...
	I0602 17:37:08.254596  374798 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 17:37:08.254651  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0602 17:37:08.254659  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.254667  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.254673  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.257871  374798 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0602 17:37:08.257894  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.257902  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.257908  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.257914  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.257922  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.257931  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.257953  374798 round_trippers.go:580]     Audit-Id: 5252500e-bac6-4306-8e54-9be10b2b09fb
	I0602 17:37:08.258612  374798 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"576"},"items":[{"metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"504","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:a
rgs":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{}," [truncated 69085 chars]
	I0602 17:37:08.260759  374798 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-l5jxv" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.260831  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-64897985d-l5jxv
	I0602 17:37:08.260851  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.260859  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.260868  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.262676  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.262710  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.262719  374798 round_trippers.go:580]     Audit-Id: 00ebb1ff-f41a-4ada-a895-89d1c8e57e9f
	I0602 17:37:08.262733  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.262746  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.262758  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.262767  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.262777  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.262889  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-64897985d-l5jxv","generateName":"coredns-64897985d-","namespace":"kube-system","uid":"d796da5e-d4e3-4761-84e2-c742ea94211a","resourceVersion":"504","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"64897985d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-64897985d","uid":"20656fc2-6eb0-4735-8ce1-d983b9816977","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"20656fc2-6eb0-4735-8ce1-d983b9816977\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:live
nessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path" [truncated 5979 chars]
	I0602 17:37:08.263334  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:08.263348  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.263356  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.263366  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.265272  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.265300  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.265310  374798 round_trippers.go:580]     Audit-Id: 9e4e36f3-f3e0-4fef-b8d7-eb6655537896
	I0602 17:37:08.265319  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.265326  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.265338  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.265351  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.265361  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.265459  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"516","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5073 chars]
	I0602 17:37:08.265739  374798 pod_ready.go:92] pod "coredns-64897985d-l5jxv" in "kube-system" namespace has status "Ready":"True"
	I0602 17:37:08.265751  374798 pod_ready.go:81] duration metric: took 4.967827ms waiting for pod "coredns-64897985d-l5jxv" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.265759  374798 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.265807  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-20220602173558-283122
	I0602 17:37:08.265815  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.265822  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.265831  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.267593  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.267612  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.267619  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.267625  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.267630  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.267635  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.267640  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.267645  374798 round_trippers.go:580]     Audit-Id: 62a2831b-ba43-4391-8da8-5c49083402e7
	I0602 17:37:08.267829  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220602173558-283122","namespace":"kube-system","uid":"2de3dc57-6748-4c20-bf65-3b2cbd2f8a0f","resourceVersion":"331","creationTimestamp":"2022-06-02T17:36:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"de8e89a254ff48460c714894f4297613","kubernetes.io/config.mirror":"de8e89a254ff48460c714894f4297613","kubernetes.io/config.seen":"2022-06-02T17:36:20.948851860Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:21Z","fieldsType":"FieldsV1","
fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes. [truncated 5804 chars]
	I0602 17:37:08.268331  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:08.268354  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.268368  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.268382  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.270115  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.270133  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.270140  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.270146  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.270154  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.270162  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.270176  374798 round_trippers.go:580]     Audit-Id: 5b5cd77d-859c-4405-93fc-976fb6dc8159
	I0602 17:37:08.270187  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.270380  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"516","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5073 chars]
	I0602 17:37:08.270770  374798 pod_ready.go:92] pod "etcd-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:37:08.270790  374798 pod_ready.go:81] duration metric: took 5.020258ms waiting for pod "etcd-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.270811  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.270872  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220602173558-283122
	I0602 17:37:08.270885  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.270898  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.270912  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.272630  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.272648  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.272655  374798 round_trippers.go:580]     Audit-Id: 99a652a3-ed30-4ccd-9cab-300b5ff35f5c
	I0602 17:37:08.272661  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.272669  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.272677  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.272691  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.272712  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.272889  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220602173558-283122","namespace":"kube-system","uid":"999b4342-ad8c-46aa-a5a0-bdd14089e393","resourceVersion":"326","creationTimestamp":"2022-06-02T17:36:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"9b88133a9e18b6ec7d53499c3c2debcc","kubernetes.io/config.mirror":"9b88133a9e18b6ec7d53499c3c2debcc","kubernetes.io/config.seen":"2022-06-02T17:36:13.934512571Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:20Z
","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{". [truncated 8313 chars]
	I0602 17:37:08.273451  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:08.273471  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.273484  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.273495  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.275046  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.275065  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.275074  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.275081  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.275090  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.275101  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.275113  374798 round_trippers.go:580]     Audit-Id: d8d2e0ce-89ee-45c6-b490-38b68d32be77
	I0602 17:37:08.275123  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.275213  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"516","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5073 chars]
	I0602 17:37:08.275484  374798 pod_ready.go:92] pod "kube-apiserver-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:37:08.275498  374798 pod_ready.go:81] duration metric: took 4.673098ms waiting for pod "kube-apiserver-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.275509  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.275555  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220602173558-283122
	I0602 17:37:08.275565  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.275576  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.275590  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.277350  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.277372  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.277382  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.277392  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.277405  374798 round_trippers.go:580]     Audit-Id: fb417878-9737-4b4f-9ddb-7d407bef73f0
	I0602 17:37:08.277466  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.277484  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.277493  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.277595  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220602173558-283122","namespace":"kube-system","uid":"dc0ed8b1-4d22-46e1-a708-fee470e6c6fe","resourceVersion":"330","creationTimestamp":"2022-06-02T17:36:21Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"73385622a1da8f9c02cb3f38e98edff7","kubernetes.io/config.mirror":"73385622a1da8f9c02cb3f38e98edff7","kubernetes.io/config.seen":"2022-06-02T17:36:20.948871153Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":
{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror [truncated 7888 chars]
	I0602 17:37:08.278001  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:08.278016  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.278023  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.278029  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.279558  374798 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0602 17:37:08.279579  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.279590  374798 round_trippers.go:580]     Audit-Id: d3be00fb-a90e-4034-95bf-55c181c6813c
	I0602 17:37:08.279600  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.279610  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.279628  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.279643  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.279672  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.279770  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"516","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5073 chars]
	I0602 17:37:08.280068  374798 pod_ready.go:92] pod "kube-controller-manager-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:37:08.280082  374798 pod_ready.go:81] duration metric: took 4.565089ms waiting for pod "kube-controller-manager-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.280091  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kz8ts" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.452506  374798 request.go:533] Waited for 172.33901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kz8ts
	I0602 17:37:08.452567  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kz8ts
	I0602 17:37:08.452572  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.452581  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.452588  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.455087  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:08.455111  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.455120  374798 round_trippers.go:580]     Audit-Id: 1c750384-0ad7-48b3-9326-a7f03f9e7f7a
	I0602 17:37:08.455128  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.455137  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.455145  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.455154  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.455162  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.455272  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kz8ts","generateName":"kube-proxy-","namespace":"kube-system","uid":"37731fc3-69e5-4170-a1d9-d3878e1acf0a","resourceVersion":"561","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"controller-revision-hash":"549f7469d9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdf29778-5a78-41df-8413-a6f3417a1d56","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdf29778-5a78-41df-8413-a6f3417a1d56\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5552 chars]
	I0602 17:37:08.652128  374798 request.go:533] Waited for 196.382187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:08.652198  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122-m02
	I0602 17:37:08.652203  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.652211  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.652221  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.654564  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:08.654590  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.654601  374798 round_trippers.go:580]     Audit-Id: 19ae874e-ecc8-4c3c-8e08-152474f44846
	I0602 17:37:08.654610  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.654619  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.654625  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.654633  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.654647  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.654738  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122-m02","uid":"0bddfc25-80f3-4cd5-a841-dabb30cbdb66","resourceVersion":"576","creationTimestamp":"2022-06-02T17:36:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":
"v1","time":"2022-06-02T17:36:57Z","fieldsType":"FieldsV1","fieldsV1":{ [truncated 4239 chars]
	I0602 17:37:08.655063  374798 pod_ready.go:92] pod "kube-proxy-kz8ts" in "kube-system" namespace has status "Ready":"True"
	I0602 17:37:08.655080  374798 pod_ready.go:81] duration metric: took 374.982729ms waiting for pod "kube-proxy-kz8ts" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.655089  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q8c4p" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:08.852537  374798 request.go:533] Waited for 197.354151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8c4p
	I0602 17:37:08.852598  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-q8c4p
	I0602 17:37:08.852603  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:08.852612  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:08.852619  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:08.855212  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:08.855235  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:08.855243  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:08.855249  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:08.855254  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:08 GMT
	I0602 17:37:08.855259  374798 round_trippers.go:580]     Audit-Id: c632b147-e045-4e43-8f9f-1a9c8f40210c
	I0602 17:37:08.855265  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:08.855273  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:08.855424  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-q8c4p","generateName":"kube-proxy-","namespace":"kube-system","uid":"f1878b35-b1dd-4c80-b1c8-6848ceeac02c","resourceVersion":"475","creationTimestamp":"2022-06-02T17:36:33Z","labels":{"controller-revision-hash":"549f7469d9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"fdf29778-5a78-41df-8413-a6f3417a1d56","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fdf29778-5a78-41df-8413-a6f3417a1d56\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5544 chars]
	I0602 17:37:09.052282  374798 request.go:533] Waited for 196.364931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:09.052343  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:09.052348  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:09.052357  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:09.052367  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:09.054922  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:09.054952  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:09.054964  374798 round_trippers.go:580]     Audit-Id: 8c28b2a8-71af-4e3a-b39d-232a0bbe6016
	I0602 17:37:09.054973  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:09.054981  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:09.054990  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:09.055004  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:09.055017  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:09 GMT
	I0602 17:37:09.055156  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"516","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5073 chars]
	I0602 17:37:09.055495  374798 pod_ready.go:92] pod "kube-proxy-q8c4p" in "kube-system" namespace has status "Ready":"True"
	I0602 17:37:09.055511  374798 pod_ready.go:81] duration metric: took 400.416351ms waiting for pod "kube-proxy-q8c4p" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:09.055520  374798 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:09.251962  374798 request.go:533] Waited for 196.341866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220602173558-283122
	I0602 17:37:09.252024  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220602173558-283122
	I0602 17:37:09.252029  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:09.252038  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:09.252044  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:09.254674  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:09.254716  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:09.254725  374798 round_trippers.go:580]     Audit-Id: 09506928-08c1-435b-96c8-43d8f84b9678
	I0602 17:37:09.254731  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:09.254736  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:09.254742  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:09.254747  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:09.254753  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:09 GMT
	I0602 17:37:09.254861  374798 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220602173558-283122","namespace":"kube-system","uid":"b207e4b1-a64d-4aaf-bd7b-5eaec8e23004","resourceVersion":"366","creationTimestamp":"2022-06-02T17:36:21Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"16c53ed8f606fa43c821fa27956bef6a","kubernetes.io/config.mirror":"16c53ed8f606fa43c821fa27956bef6a","kubernetes.io/config.seen":"2022-06-02T17:36:20.948872903Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-06-02T17:36:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kuberne
tes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes [truncated 4770 chars]
	I0602 17:37:09.452625  374798 request.go:533] Waited for 197.345309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:09.452761  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/multinode-20220602173558-283122
	I0602 17:37:09.452769  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:09.452782  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:09.452804  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:09.455086  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:09.455109  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:09.455120  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:09.455126  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:09 GMT
	I0602 17:37:09.455131  374798 round_trippers.go:580]     Audit-Id: 4a9b01e3-7d08-4f5a-84f5-ee376bad623f
	I0602 17:37:09.455137  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:09.455142  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:09.455149  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:09.455313  374798 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"516","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach
-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd [truncated 5073 chars]
	I0602 17:37:09.455810  374798 pod_ready.go:92] pod "kube-scheduler-multinode-20220602173558-283122" in "kube-system" namespace has status "Ready":"True"
	I0602 17:37:09.455829  374798 pod_ready.go:81] duration metric: took 400.302223ms waiting for pod "kube-scheduler-multinode-20220602173558-283122" in "kube-system" namespace to be "Ready" ...
	I0602 17:37:09.455847  374798 pod_ready.go:38] duration metric: took 1.201231114s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0602 17:37:09.455877  374798 system_svc.go:44] waiting for kubelet service to be running ....
	I0602 17:37:09.455925  374798 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 17:37:09.466312  374798 system_svc.go:56] duration metric: took 10.426082ms WaitForService to wait for kubelet.
	I0602 17:37:09.466346  374798 kubeadm.go:572] duration metric: took 6.747792517s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0602 17:37:09.466374  374798 node_conditions.go:102] verifying NodePressure condition ...
	I0602 17:37:09.651734  374798 request.go:533] Waited for 185.265956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0602 17:37:09.651801  374798 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0602 17:37:09.651806  374798 round_trippers.go:469] Request Headers:
	I0602 17:37:09.651814  374798 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0602 17:37:09.651821  374798 round_trippers.go:473]     Accept: application/json, */*
	I0602 17:37:09.654233  374798 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0602 17:37:09.654265  374798 round_trippers.go:577] Response Headers:
	I0602 17:37:09.654276  374798 round_trippers.go:580]     Date: Thu, 02 Jun 2022 17:37:09 GMT
	I0602 17:37:09.654285  374798 round_trippers.go:580]     Audit-Id: 3a441877-2ef4-4935-89a9-9553ffd6f9b2
	I0602 17:37:09.654294  374798 round_trippers.go:580]     Cache-Control: no-cache, private
	I0602 17:37:09.654304  374798 round_trippers.go:580]     Content-Type: application/json
	I0602 17:37:09.654314  374798 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9a6ba702-6e64-490c-99be-e7c63c5efdfb
	I0602 17:37:09.654320  374798 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 82cf39e6-45d8-4b48-8c92-44add7df2a49
	I0602 17:37:09.654442  374798 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"578"},"items":[{"metadata":{"name":"multinode-20220602173558-283122","uid":"7ffe6f02-47c1-40c0-844c-3aa4c212c972","resourceVersion":"516","creationTimestamp":"2022-06-02T17:36:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220602173558-283122","kubernetes.io/os":"linux","minikube.k8s.io/commit":"408dc4036f5a6d8b1313a2031b5dcb646a720fae","minikube.k8s.io/name":"multinode-20220602173558-283122","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_06_02T17_36_22_0700","minikube.k8s.io/version":"v1.26.0-beta.1","node-role.kubernetes.io/control-plane":"","node-role.kubernetes.io/master":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock","node.alpha.kubernetes.io/ttl":"
0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma [truncated 10357 chars]
	I0602 17:37:09.654898  374798 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0602 17:37:09.654914  374798 node_conditions.go:123] node cpu capacity is 8
	I0602 17:37:09.654924  374798 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0602 17:37:09.654927  374798 node_conditions.go:123] node cpu capacity is 8
	I0602 17:37:09.654931  374798 node_conditions.go:105] duration metric: took 188.551949ms to run NodePressure ...
	I0602 17:37:09.654949  374798 start.go:213] waiting for startup goroutines ...
	I0602 17:37:09.693503  374798 start.go:504] kubectl: 1.24.1, cluster: 1.23.6 (minor skew: 1)
	I0602 17:37:09.697481  374798 out.go:177] * Done! kubectl is now configured to use "multinode-20220602173558-283122" cluster and "default" namespace by default
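	
	The run above spends most of its time in pod_ready.go, polling the API server once per pod and checking the PodReady condition (coredns-64897985d-l5jxv, etcd, kube-apiserver, kube-controller-manager, the two kube-proxy pods, kube-scheduler). A minimal client-go sketch of that pattern, assuming a reachable kubeconfig; waitPodReady is an illustrative name, not minikube's actual helper:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitPodReady polls the pod until its Ready condition is True or the
	// timeout expires (illustrative; the log's pod_ready.go is minikube's own code).
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, err // surface API errors instead of retrying blindly
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(cs, "kube-system", "coredns-64897985d-l5jxv", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}
	
	Each GET in the log above corresponds to one iteration of such a poll; the log also shows minikube re-fetching the pod's node between checks.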
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-06-02 17:36:06 UTC, end at Thu 2022-06-02 17:45:19 UTC. --
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[254]: time="2022-06-02T17:36:08.472332999Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jun 02 17:36:08 multinode-20220602173558-283122 systemd[1]: docker.service: Succeeded.
	Jun 02 17:36:08 multinode-20220602173558-283122 systemd[1]: Stopped Docker Application Container Engine.
	Jun 02 17:36:08 multinode-20220602173558-283122 systemd[1]: Starting Docker Application Container Engine...
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.516267231Z" level=info msg="Starting up"
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.518288581Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.518316302Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.518339313Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.518349035Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.520225053Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.520253774Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.520273528Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.520287190Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.526492223Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.530989753Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.531014520Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.531019923Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.531169527Z" level=info msg="Loading containers: start."
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.612687282Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.647617431Z" level=info msg="Loading containers: done."
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.658821038Z" level=info msg="Docker daemon" commit=f756502 graphdriver(s)=overlay2 version=20.10.16
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.658887309Z" level=info msg="Daemon has completed initialization"
	Jun 02 17:36:08 multinode-20220602173558-283122 systemd[1]: Started Docker Application Container Engine.
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.676600703Z" level=info msg="API listen on [::]:2376"
	Jun 02 17:36:08 multinode-20220602173558-283122 dockerd[493]: time="2022-06-02T17:36:08.679907817Z" level=info msg="API listen on /var/run/docker.sock"
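	
	The "Waited for ... due to client-side throttling, not priority and fairness" lines earlier in this log come from client-go's client-side rate limiter, not from the API server's Priority and Fairness machinery: once the client exceeds its QPS/burst budget, requests are delayed locally before they are ever sent. A hedged sketch of loosening that budget on a rest.Config (the values are illustrative, not a recommendation):
	
	package main
	
	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// Raise the client-side limiter so bursts of GETs (like the pod_ready
		// polling in the run log) are not delayed locally. Illustrative values.
		cfg.QPS = 50
		cfg.Burst = 100
		_ = kubernetes.NewForConfigOrDie(cfg)
	}
	
	Raising QPS/Burst trades fewer client-side waits for more load on the API server, which is why the defaults are conservative.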
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	d39c455a82b66       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   8 minutes ago       Running             busybox                   0                   cff79d290c693
	e54a6202cdf67       a4ca41631cc7a                                                                                         8 minutes ago       Running             coredns                   0                   51fda5ea13233
	1a82c523cdd37       6e38f40d628db                                                                                         8 minutes ago       Running             storage-provisioner       0                   de0a268075989
	9ab48ef578bc9       kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c              8 minutes ago       Running             kindnet-cni               0                   68604fb5b9b16
	0612b1336d0e9       4c03754524064                                                                                         8 minutes ago       Running             kube-proxy                0                   8b348b20026be
	fe7106c1507c2       df7b72818ad2e                                                                                         9 minutes ago       Running             kube-controller-manager   0                   6dd325a1bbbf2
	905dd1d3f937c       25f8c7f3da61c                                                                                         9 minutes ago       Running             etcd                      0                   503ddb8d4e075
	9c2cc422d2c1b       8fa62c12256df                                                                                         9 minutes ago       Running             kube-apiserver            0                   8d21b199ac971
	b537ad213767e       595f327f224a4                                                                                         9 minutes ago       Running             kube-scheduler            0                   e82a0aa83c92e
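	
	Back in the run log, minikube verified the kubelet unit with `sudo systemctl is-active --quiet service kubelet` (system_svc.go). The same check from Go, as a minimal sketch; `is-active --quiet` prints nothing and answers through the exit code alone:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Exit status 0 means the unit is active; anything else surfaces as err.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		if err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}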
	
	* 
	* ==> coredns [e54a6202cdf6] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
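	
	The NodePressure verification at the end of the run reads each node's capacity (the 8 CPUs and 304695084Ki of ephemeral storage echoed by node_conditions.go, and again under Capacity in the describe output below). A minimal client-go sketch of that read, assuming a reachable kubeconfig:
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity mirrors the `describe nodes` output: cpu, ephemeral-storage, memory.
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
				n.Name,
				n.Status.Capacity.Cpu().String(),
				n.Status.Capacity.StorageEphemeral().String())
		}
	}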
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20220602173558-283122
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220602173558-283122
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae
	                    minikube.k8s.io/name=multinode-20220602173558-283122
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_02T17_36_22_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 17:36:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220602173558-283122
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 17:45:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 17:42:26 +0000   Thu, 02 Jun 2022 17:36:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 17:42:26 +0000   Thu, 02 Jun 2022 17:36:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 17:42:26 +0000   Thu, 02 Jun 2022 17:36:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 17:42:26 +0000   Thu, 02 Jun 2022 17:36:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    multinode-20220602173558-283122
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                53d269d0-8245-41e8-ac6d-0e3bcce49ad2
	  Boot ID:                    eac629ea-39e3-4b75-b891-94bd750a4fe6
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7978565885-2cv69                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 coredns-64897985d-l5jxv                                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m46s
	  kube-system                 etcd-multinode-20220602173558-283122                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m58s
	  kube-system                 kindnet-d4jwl                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m46s
	  kube-system                 kube-apiserver-multinode-20220602173558-283122              250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 kube-controller-manager-multinode-20220602173558-283122    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 kube-proxy-q8c4p                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-scheduler-multinode-20220602173558-283122              100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 8m44s                kube-proxy  
	  Normal  Starting                 9m6s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m5s (x5 over 9m5s)  kubelet     Node multinode-20220602173558-283122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m5s (x5 over 9m5s)  kubelet     Node multinode-20220602173558-283122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m5s (x4 over 9m5s)  kubelet     Node multinode-20220602173558-283122 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 8m59s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m58s                kubelet     Node multinode-20220602173558-283122 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m58s                kubelet     Node multinode-20220602173558-283122 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m58s                kubelet     Node multinode-20220602173558-283122 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m58s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m38s                kubelet     Node multinode-20220602173558-283122 status is now: NodeReady
	
	
	Name:               multinode-20220602173558-283122-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220602173558-283122-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Jun 2022 17:36:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220602173558-283122-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Jun 2022 17:45:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Jun 2022 17:42:34 +0000   Thu, 02 Jun 2022 17:36:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Jun 2022 17:42:34 +0000   Thu, 02 Jun 2022 17:36:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Jun 2022 17:42:34 +0000   Thu, 02 Jun 2022 17:36:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Jun 2022 17:42:34 +0000   Thu, 02 Jun 2022 17:37:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    multinode-20220602173558-283122-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873816Ki
	  pods:               110
	System Info:
	  Machine ID:                 a34bb2508bce429bb90502b0ef044420
	  System UUID:                33a7e218-bf68-456c-9c38-8eaf0595363e
	  Boot ID:                    eac629ea-39e3-4b75-b891-94bd750a4fe6
	  Kernel Version:             5.13.0-1027-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7978565885-tq8p2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kindnet-dkv9b               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m22s
	  kube-system                 kube-proxy-kz8ts            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 8m19s                  kube-proxy  
	  Normal  Starting                 8m22s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m22s (x2 over 8m22s)  kubelet     Node multinode-20220602173558-283122-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m22s (x2 over 8m22s)  kubelet     Node multinode-20220602173558-283122-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m22s (x2 over 8m22s)  kubelet     Node multinode-20220602173558-283122-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m22s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m12s                  kubelet     Node multinode-20220602173558-283122-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.403841] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000006] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.002477] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000007] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.004766] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000006] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.002078] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000007] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.003580] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000007] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.004726] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000007] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.004138] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000005] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.004774] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000006] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.004476] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000006] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[Jun 2 17:45] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000007] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.002238] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000007] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
	[  +5.004737] IPv4: martian source 10.244.0.133 from 10.244.1.2, on dev br-e61542ad8e34
	[  +0.000007] ll header: 00000000: 02 42 d7 bc 38 84 02 42 c0 a8 31 02 08 00
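The repeated "martian source" entries above show the kernel rejecting packets from pod 10.244.1.2 (the m02 pod CIDR) addressed to 10.244.0.133 on the Docker bridge br-e61542ad8e34, which lines up with the cross-node ping failure reported below; strict reverse-path filtering on the host is a common trigger. A minimal diagnostic sketch, assuming a Linux host with sysctl (the bridge name is taken from this log, and the rp_filter change is illustrative rather than a fix the harness applies):

	# Check reverse-path filtering (1 = strict, 2 = loose, 0 = off).
	sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.br-e61542ad8e34.rp_filter
	# Illustrative only: relax filtering (effective mode is the stricter of the
	# "all" and per-interface values) and re-run the ping to see if drops stop.
	sudo sysctl -w net.ipv4.conf.all.rp_filter=2 net.ipv4.conf.br-e61542ad8e34.rp_filter=2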
	
	* 
	* ==> etcd [905dd1d3f937] <==
	* {"level":"info","ts":"2022-06-02T17:36:15.248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-06-02T17:36:15.249Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-06-02T17:36:15.254Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-02T17:36:15.255Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:36:15.255Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-06-02T17:36:15.255Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-02T17:36:15.255Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-06-02T17:36:15.641Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:36:15.642Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:36:15.642Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:36:15.642Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-02T17:36:15.643Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:multinode-20220602173558-283122 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-02T17:36:15.643Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:36:15.643Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-02T17:36:15.643Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-02T17:36:15.643Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-06-02T17:36:15.644Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-02T17:36:15.644Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  17:45:19 up  2:27,  0 users,  load average: 0.15, 0.47, 0.69
	Linux multinode-20220602173558-283122 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [9c2cc422d2c1] <==
	* I0602 17:36:17.934019       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0602 17:36:17.934078       1 cache.go:39] Caches are synced for autoregister controller
	I0602 17:36:17.934115       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0602 17:36:17.934094       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0602 17:36:17.934420       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0602 17:36:17.934519       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0602 17:36:18.792442       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0602 17:36:18.792476       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0602 17:36:18.798623       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0602 17:36:18.801835       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0602 17:36:18.801857       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0602 17:36:19.185881       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0602 17:36:19.214578       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0602 17:36:19.361853       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0602 17:36:19.366878       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0602 17:36:19.367911       1 controller.go:611] quota admission added evaluator for: endpoints
	I0602 17:36:19.371843       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0602 17:36:19.917469       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0602 17:36:20.782557       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0602 17:36:20.789092       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0602 17:36:20.801697       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0602 17:36:33.523296       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0602 17:36:33.623777       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0602 17:36:34.682189       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [fe7106c1507c] <==
	* I0602 17:36:32.915128       1 shared_informer.go:247] Caches are synced for job 
	I0602 17:36:32.921098       1 shared_informer.go:247] Caches are synced for cronjob 
	I0602 17:36:32.973722       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:36:32.975515       1 shared_informer.go:247] Caches are synced for resource quota 
	I0602 17:36:33.398678       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:36:33.469355       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0602 17:36:33.469385       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0602 17:36:33.525483       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0602 17:36:33.539367       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0602 17:36:33.629403       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q8c4p"
	I0602 17:36:33.631456       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-d4jwl"
	I0602 17:36:33.775146       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-dqpcc"
	I0602 17:36:33.782710       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-l5jxv"
	I0602 17:36:33.803528       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-dqpcc"
	I0602 17:36:42.721991       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0602 17:36:57.589057       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220602173558-283122-m02" does not exist
	I0602 17:36:57.594847       1 range_allocator.go:374] Set node multinode-20220602173558-283122-m02 PodCIDR to [10.244.1.0/24]
	I0602 17:36:57.598752       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kz8ts"
	I0602 17:36:57.598779       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-dkv9b"
	W0602 17:36:57.724114       1 node_lifecycle_controller.go:1012] Missing timestamp for Node multinode-20220602173558-283122-m02. Assuming now as a timestamp.
	I0602 17:36:57.724149       1 event.go:294] "Event occurred" object="multinode-20220602173558-283122-m02" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20220602173558-283122-m02 event: Registered Node multinode-20220602173558-283122-m02 in Controller"
	I0602 17:37:10.535885       1 event.go:294] "Event occurred" object="default/busybox" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7978565885 to 2"
	I0602 17:37:10.541255       1 event.go:294] "Event occurred" object="default/busybox-7978565885" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7978565885-tq8p2"
	I0602 17:37:10.544352       1 event.go:294] "Event occurred" object="default/busybox-7978565885" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7978565885-2cv69"
	I0602 17:37:12.733343       1 event.go:294] "Event occurred" object="default/busybox-7978565885-tq8p2" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7978565885-tq8p2"
	
	* 
	* ==> kube-proxy [0612b1336d0e] <==
	* I0602 17:36:34.655981       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0602 17:36:34.656062       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0602 17:36:34.656099       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0602 17:36:34.678720       1 server_others.go:206] "Using iptables Proxier"
	I0602 17:36:34.678758       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0602 17:36:34.678766       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0602 17:36:34.678784       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0602 17:36:34.679241       1 server.go:656] "Version info" version="v1.23.6"
	I0602 17:36:34.679850       1 config.go:317] "Starting service config controller"
	I0602 17:36:34.679850       1 config.go:226] "Starting endpoint slice config controller"
	I0602 17:36:34.679890       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0602 17:36:34.679890       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0602 17:36:34.780156       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0602 17:36:34.780174       1 shared_informer.go:247] Caches are synced for service config 
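kube-proxy's "Unknown proxy mode, assuming iptables proxy" line means the mode field in its config was left empty, so it fell back to the iptables proxier; the two "Caches are synced" lines confirm it finished programming rules. A quick way to verify which proxier is active, as a sketch (context and profile names from this report; the grep pattern is illustrative):

	# The mode field is empty in this run, which kube-proxy treats as iptables.
	kubectl --context multinode-20220602173558-283122 -n kube-system \
	  get configmap kube-proxy -o yaml | grep 'mode:'
	# The KUBE-SERVICES nat chain exists once the iptables proxier has synced.
	minikube -p multinode-20220602173558-283122 ssh -- \
	  sudo iptables -t nat -L KUBE-SERVICES -n | head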
	
	* 
	* ==> kube-scheduler [b537ad213767] <==
	* W0602 17:36:17.854407       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0602 17:36:17.854478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0602 17:36:17.854780       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0602 17:36:17.854992       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 17:36:17.855049       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0602 17:36:17.854797       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0602 17:36:17.855090       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0602 17:36:17.854808       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0602 17:36:17.855110       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0602 17:36:17.854913       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0602 17:36:17.855136       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0602 17:36:17.854926       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 17:36:17.855150       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 17:36:17.854971       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 17:36:17.855170       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0602 17:36:17.855029       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0602 17:36:18.771151       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0602 17:36:18.771199       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0602 17:36:18.792668       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0602 17:36:18.792708       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0602 17:36:18.979071       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0602 17:36:18.979110       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0602 17:36:19.038351       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0602 17:36:19.038394       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0602 17:36:21.451696       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
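The burst of "forbidden" list/watch errors above comes from the scheduler starting before the apiserver has finished publishing its default RBAC bindings; the final "Caches are synced" line shows it recovered, so this is startup noise rather than a failure cause. If such errors persisted, checking the binding and the effective permission would be the next step; a sketch:

	# system:kube-scheduler should be bound to its matching ClusterRole.
	kubectl --context multinode-20220602173558-283122 \
	  get clusterrolebinding system:kube-scheduler -o wide
	# Verify the scheduler identity can list pods.
	kubectl --context multinode-20220602173558-283122 \
	  auth can-i list pods --as=system:kube-scheduler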
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-06-02 17:36:06 UTC, end at Thu 2022-06-02 17:45:19 UTC. --
	Jun 02 17:36:32 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:32.840622    1929 docker_service.go:364] "Docker cri received runtime config" runtimeConfig="&RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Jun 02 17:36:32 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:32.840812    1929 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 02 17:36:32 multinode-20220602173558-283122 kubelet[1929]: E0602 17:36:32.848474    1929 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.633961    1929 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.637381    1929 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648555    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1878b35-b1dd-4c80-b1c8-6848ceeac02c-xtables-lock\") pod \"kube-proxy-q8c4p\" (UID: \"f1878b35-b1dd-4c80-b1c8-6848ceeac02c\") " pod="kube-system/kube-proxy-q8c4p"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648599    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1878b35-b1dd-4c80-b1c8-6848ceeac02c-lib-modules\") pod \"kube-proxy-q8c4p\" (UID: \"f1878b35-b1dd-4c80-b1c8-6848ceeac02c\") " pod="kube-system/kube-proxy-q8c4p"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648625    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbmkl\" (UniqueName: \"kubernetes.io/projected/f1878b35-b1dd-4c80-b1c8-6848ceeac02c-kube-api-access-zbmkl\") pod \"kube-proxy-q8c4p\" (UID: \"f1878b35-b1dd-4c80-b1c8-6848ceeac02c\") " pod="kube-system/kube-proxy-q8c4p"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648647    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/02c02672-9134-4bb8-abdb-c15c1f3334ac-cni-cfg\") pod \"kindnet-d4jwl\" (UID: \"02c02672-9134-4bb8-abdb-c15c1f3334ac\") " pod="kube-system/kindnet-d4jwl"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648665    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1878b35-b1dd-4c80-b1c8-6848ceeac02c-kube-proxy\") pod \"kube-proxy-q8c4p\" (UID: \"f1878b35-b1dd-4c80-b1c8-6848ceeac02c\") " pod="kube-system/kube-proxy-q8c4p"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648710    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf7kl\" (UniqueName: \"kubernetes.io/projected/02c02672-9134-4bb8-abdb-c15c1f3334ac-kube-api-access-rf7kl\") pod \"kindnet-d4jwl\" (UID: \"02c02672-9134-4bb8-abdb-c15c1f3334ac\") " pod="kube-system/kindnet-d4jwl"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648793    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02c02672-9134-4bb8-abdb-c15c1f3334ac-xtables-lock\") pod \"kindnet-d4jwl\" (UID: \"02c02672-9134-4bb8-abdb-c15c1f3334ac\") " pod="kube-system/kindnet-d4jwl"
	Jun 02 17:36:33 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:33.648836    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02c02672-9134-4bb8-abdb-c15c1f3334ac-lib-modules\") pod \"kindnet-d4jwl\" (UID: \"02c02672-9134-4bb8-abdb-c15c1f3334ac\") " pod="kube-system/kindnet-d4jwl"
	Jun 02 17:36:34 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:34.278505    1929 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="8b348b20026befa3c552f31c8ab97fdc1e10e628c4b4e41c29231be6ada84e9f"
	Jun 02 17:36:34 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:34.558284    1929 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="68604fb5b9b164523ba623d98dd04fda3282413a13be677be9364b38ba1a32d0"
	Jun 02 17:36:35 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:35.933977    1929 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.mk"
	Jun 02 17:36:36 multinode-20220602173558-283122 kubelet[1929]: E0602 17:36:36.344191    1929 kubelet.go:2386] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Jun 02 17:36:41 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:41.542745    1929 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 17:36:41 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:41.543112    1929 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 17:36:41 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:41.593727    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d796da5e-d4e3-4761-84e2-c742ea94211a-config-volume\") pod \"coredns-64897985d-l5jxv\" (UID: \"d796da5e-d4e3-4761-84e2-c742ea94211a\") " pod="kube-system/coredns-64897985d-l5jxv"
	Jun 02 17:36:41 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:41.593788    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8974k\" (UniqueName: \"kubernetes.io/projected/d796da5e-d4e3-4761-84e2-c742ea94211a-kube-api-access-8974k\") pod \"coredns-64897985d-l5jxv\" (UID: \"d796da5e-d4e3-4761-84e2-c742ea94211a\") " pod="kube-system/coredns-64897985d-l5jxv"
	Jun 02 17:36:41 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:41.593906    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8a59fe44-28d0-431f-9a48-d4f0705d7d5a-tmp\") pod \"storage-provisioner\" (UID: \"8a59fe44-28d0-431f-9a48-d4f0705d7d5a\") " pod="kube-system/storage-provisioner"
	Jun 02 17:36:41 multinode-20220602173558-283122 kubelet[1929]: I0602 17:36:41.593963    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqf5l\" (UniqueName: \"kubernetes.io/projected/8a59fe44-28d0-431f-9a48-d4f0705d7d5a-kube-api-access-wqf5l\") pod \"storage-provisioner\" (UID: \"8a59fe44-28d0-431f-9a48-d4f0705d7d5a\") " pod="kube-system/storage-provisioner"
	Jun 02 17:37:10 multinode-20220602173558-283122 kubelet[1929]: I0602 17:37:10.552847    1929 topology_manager.go:200] "Topology Admit Handler"
	Jun 02 17:37:10 multinode-20220602173558-283122 kubelet[1929]: I0602 17:37:10.575008    1929 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zms5\" (UniqueName: \"kubernetes.io/projected/6c6aa109-c4c8-4606-b0a2-e2321da8d918-kube-api-access-7zms5\") pod \"busybox-7978565885-2cv69\" (UID: \"6c6aa109-c4c8-4606-b0a2-e2321da8d918\") " pod="default/busybox-7978565885-2cv69"
	
	* 
	* ==> storage-provisioner [1a82c523cdd3] <==
	* I0602 17:36:42.170493       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0602 17:36:42.179486       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0602 17:36:42.179533       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0602 17:36:42.237252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0602 17:36:42.237454       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20220602173558-283122_88142c70-60d4-455b-abae-84f21c293b56!
	I0602 17:36:42.237453       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"87375484-2cf5-4b00-ab06-bb73cfde4992", APIVersion:"v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20220602173558-283122_88142c70-60d4-455b-abae-84f21c293b56 became leader
	I0602 17:36:42.338004       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20220602173558-283122_88142c70-60d4-455b-abae-84f21c293b56!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-20220602173558-283122 -n multinode-20220602173558-283122
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-20220602173558-283122 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context multinode-20220602173558-283122 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context multinode-20220602173558-283122 describe pod : exit status 1 (42.754784ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context multinode-20220602173558-283122 describe pod : exit status 1
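The exit status 1 above is a quirk of the post-mortem helper rather than a cluster problem: the previous step found no non-running pods, so helpers_test.go invoked "kubectl describe pod" with an empty name list, which kubectl rejects with "resource name may not be empty". A sketch of the guard that avoids the spurious error, mirroring the helper's two commands (context name from this report; the shell variable is illustrative):

	# Only describe pods when the field selector actually matched something.
	pods="$(kubectl --context multinode-20220602173558-283122 get po -A \
	  --field-selector=status.phase!=Running \
	  -o jsonpath='{.items[*].metadata.name}')"
	if [ -n "$pods" ]; then
	  kubectl --context multinode-20220602173558-283122 describe pod $pods
	fi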
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (123.44s)
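For reference, the failing check can be re-run by hand against the two busybox pods listed in the node tables above; a sketch approximating what PingHostFrom2Pods exercises (the test resolves host.minikube.internal from each pod and pings the result; pod names are taken from this report):

	# From each pod, resolve the host alias and send a single ping.
	for pod in busybox-7978565885-2cv69 busybox-7978565885-tq8p2; do
	  kubectl --context multinode-20220602173558-283122 exec "$pod" -- \
	    sh -c "nslookup host.minikube.internal && ping -c 1 host.minikube.internal"
	done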

                                                
                                    
x
+
TestStartStop/group/default-k8s-different-port/serial/SecondStart (13.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220602180121-283122 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-different-port-20220602180121-283122 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6: exit status 80 (13.246878311s)

                                                
                                                
-- stdout --
	* [default-k8s-different-port-20220602180121-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	* Starting control plane node default-k8s-different-port-20220602180121-283122 in cluster default-k8s-different-port-20220602180121-283122
	* Pulling base image ...
	* Restarting existing docker container for "default-k8s-different-port-20220602180121-283122" ...
	* Restarting existing docker container for "default-k8s-different-port-20220602180121-283122" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 18:06:31.567184  569614 out.go:296] Setting OutFile to fd 1 ...
	I0602 18:06:31.567369  569614 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 18:06:31.567379  569614 out.go:309] Setting ErrFile to fd 2...
	I0602 18:06:31.567384  569614 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 18:06:31.567504  569614 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 18:06:31.567757  569614 out.go:303] Setting JSON to false
	I0602 18:06:31.569424  569614 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10145,"bootTime":1654183047,"procs":635,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0602 18:06:31.569507  569614 start.go:125] virtualization: kvm guest
	I0602 18:06:31.572447  569614 out.go:177] * [default-k8s-different-port-20220602180121-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0602 18:06:31.574282  569614 notify.go:193] Checking for updates...
	I0602 18:06:31.575972  569614 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 18:06:31.577823  569614 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 18:06:31.579459  569614 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 18:06:31.581353  569614 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 18:06:31.583045  569614 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0602 18:06:31.585358  569614 config.go:178] Loaded profile config "default-k8s-different-port-20220602180121-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 18:06:31.585889  569614 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 18:06:31.631053  569614 docker.go:137] docker version: linux-20.10.16
	I0602 18:06:31.631157  569614 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 18:06:31.751239  569614 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:46 SystemTime:2022-06-02 18:06:31.664029117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 18:06:31.751482  569614 docker.go:254] overlay module found
	I0602 18:06:31.754386  569614 out.go:177] * Using the docker driver based on existing profile
	I0602 18:06:31.755941  569614 start.go:284] selected driver: docker
	I0602 18:06:31.755968  569614 start.go:806] validating driver "docker" against &{Name:default-k8s-different-port-20220602180121-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220602180121-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 18:06:31.756124  569614 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 18:06:31.757414  569614 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 18:06:31.877468  569614 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-02 18:06:31.792809118 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 18:06:31.877740  569614 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 18:06:31.877770  569614 cni.go:95] Creating CNI manager for ""
	I0602 18:06:31.877784  569614 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 18:06:31.877803  569614 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220602180121-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:default-k8s-different-port-20220602180121-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 18:06:31.880155  569614 out.go:177] * Starting control plane node default-k8s-different-port-20220602180121-283122 in cluster default-k8s-different-port-20220602180121-283122
	I0602 18:06:31.881674  569614 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 18:06:31.883270  569614 out.go:177] * Pulling base image ...
	I0602 18:06:31.884733  569614 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 18:06:31.884794  569614 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 18:06:31.884809  569614 cache.go:57] Caching tarball of preloaded images
	I0602 18:06:31.884850  569614 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 18:06:31.885114  569614 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 18:06:31.885138  569614 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 18:06:31.885326  569614 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/config.json ...
	I0602 18:06:31.938598  569614 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 18:06:31.938636  569614 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 18:06:31.938653  569614 cache.go:206] Successfully downloaded all kic artifacts
	I0602 18:06:31.938705  569614 start.go:352] acquiring machines lock for default-k8s-different-port-20220602180121-283122: {Name:mkdd968f3b4a154d336fd595e63c931cb4826e3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 18:06:31.938849  569614 start.go:356] acquired machines lock for "default-k8s-different-port-20220602180121-283122" in 111.968µs
	I0602 18:06:31.938882  569614 start.go:94] Skipping create...Using existing machine configuration
	I0602 18:06:31.938895  569614 fix.go:55] fixHost starting: 
	I0602 18:06:31.939256  569614 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602180121-283122 --format={{.State.Status}}
	I0602 18:06:31.976570  569614 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220602180121-283122: state=Stopped err=<nil>
	W0602 18:06:31.976618  569614 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 18:06:31.979278  569614 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220602180121-283122" ...
	I0602 18:06:31.980838  569614 cli_runner.go:164] Run: docker start default-k8s-different-port-20220602180121-283122
	W0602 18:06:32.034491  569614 cli_runner.go:211] docker start default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:32.034568  569614 cli_runner.go:164] Run: docker inspect default-k8s-different-port-20220602180121-283122
	I0602 18:06:32.070378  569614 errors.go:84] Postmortem inspect ("docker inspect default-k8s-different-port-20220602180121-283122"): -- stdout --
	[
	    {
	        "Id": "c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503",
	        "Created": "2022-06-02T18:01:28.65338102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431 not found",
	            "StartedAt": "2022-06-02T18:01:29.039934789Z",
	            "FinishedAt": "2022-06-02T18:06:30.797930837Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hostname",
	        "HostsPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hosts",
	        "LogPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503-json.log",
	        "Name": "/default-k8s-different-port-20220602180121-283122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220602180121-283122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220602180121-283122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48-init/diff:/var/lib/docker/overlay2/9517781e97ae29bbe1cf38381a601e806342524da1c5d5dc15ba54ec17f204b6/diff:/var/lib/docker/overlay2/28e14d8724bd82b9b6adde55b66e6d1f1be4fa6c3379f2f44ca1138dd648175a/diff:/var/lib/docker/overlay2/24589dfb9ddc941e7cb22b5e38dec69d1af0a29a330d344152ca7f26010354a8/diff:/var/lib/docker/overlay2/66c45d6273a3507b6e0b6b0753ec4fc82f07160fd115626e24a42bda27bbba86/diff:/var/lib/docker/overlay2/a6f0c661bd178a2a0fef034493a114067e8952b8516e69aff1e1e94a75918bbc/diff:/var/lib/docker/overlay2/5b40a045506372be84bfb01d05b068bc4cc926c5eb9e868226c4c386e8a331c7/diff:/var/lib/docker/overlay2/dbf9a9c6e59628984c95ea93ff4ada66c53b22ca3c2f910f70efba895cf40718/diff:/var/lib/docker/overlay2/5eea619393ee989a35a37a13b0633685af91729e1b74f6e73fd94b63c9d7d1fa/diff:/var/lib/docker/overlay2/24de6fbece5386c3658fa3e2c01723e6c1c24fb837d53b3d5a8c75b64ce2cd06/diff:/var/lib/docker/overlay2/04593e
f7d3bf72ee8b7439b0326af5b9818930e0dd48c35dba97e3b4ae39f4ee/diff:/var/lib/docker/overlay2/09310697cecb557085bb4d5dfc810a82fc533759ad98179263b2fc94153a64fa/diff:/var/lib/docker/overlay2/ee75c420f0615a19a5d44cfe042bdca5a2cf0f1c541168ba424140cf2aa7f98d/diff:/var/lib/docker/overlay2/fc8643a3e2060ab0365cd1227ab9ebbd5a573e6ebf8379bc6b64a756a1f66562/diff:/var/lib/docker/overlay2/0f8571cab87b35114fab495cbde9af8ac7e58be99412da82a79c90239e2227e1/diff:/var/lib/docker/overlay2/e1a5ae079e4f566081e5e786955257c23809c8719ea6b4593c8c91ac00380379/diff:/var/lib/docker/overlay2/373565e63dec12c85906e80b4a030f7d852992a756749be9424ac17802d589c0/diff:/var/lib/docker/overlay2/b886b42aa5e9f06b4381a6f58d55338774a2d02af2e91e3de156aa4bca4e4be0/diff:/var/lib/docker/overlay2/dee534a744ce54d08ca752b7f64e1772c063d4ba1e008c5a9773188bb009c377/diff:/var/lib/docker/overlay2/c8cf6667f29ffc5417675e4bd8eee6a91468de13292e3cb6ae2c70d3ad56d5fd/diff:/var/lib/docker/overlay2/072b57a4228c3928bdb27162809b102b485bb94c5f726c9a3e9892267ed30839/diff:/var/lib/d
ocker/overlay2/491d5b51bf6de1cfc4eef288c454f54e3b424e3b83fd9dc216c35891c7923f55/diff:/var/lib/docker/overlay2/78f167c2cc286e4058ef09d5a719437afddde854aac41b8d054567865fa61024/diff:/var/lib/docker/overlay2/10763b3ba26983ed9426ccca08bfceb7037d95c69827847e2ff755b4f2c5a4ab/diff:/var/lib/docker/overlay2/e559098efb21164d96bd0aee50fee9f48310df68887c5c884f4ba3f3fc895260/diff:/var/lib/docker/overlay2/555ef1505fe2b1ab4244f073e9212f269d70d8cd6ca0cd517d21cc80cd25b4c1/diff:/var/lib/docker/overlay2/0201da7c907a1f8f564cda48c3f3a403e484d901739d2584f1343a7907685c5e/diff:/var/lib/docker/overlay2/0abae5d84f8497a5114349421415431c43a34015ad98f6c9c3140143d2f1f383/diff:/var/lib/docker/overlay2/80c37982eef2dc00bd08f208551df377570134af84008156f6a500855fd2ba10/diff:/var/lib/docker/overlay2/cc38481bb11be97cd54c0032709337a491580c2f0ae6d3f7eec4c3616f8e3126/diff:/var/lib/docker/overlay2/dda978aeb4c7317bafbc077229c4417a4d8e7658d9604803c401448b471eab5e/diff:/var/lib/docker/overlay2/7c230ae0970689e2932cdaadc9e3f86ec8215fbf384c9eb59cfb958daf7
3197e/diff:/var/lib/docker/overlay2/145db51b169211e46c3bbab5ca998b2a019a4a813801acc2dae9b2dc09bc2ecc/diff:/var/lib/docker/overlay2/e604163672a013549c59bc042d1b932a3dad9e104d7079fbbb3ca24c29351c0d/diff:/var/lib/docker/overlay2/5a304d8680fd3fb594754173a70646923e2715ed21c98b8c43e1f1ef2d7abd6a/diff:/var/lib/docker/overlay2/b77f00351ddd70be2f3d9dd622b78e1a2937c4ba71585b357fef88d285eb762e/diff:/var/lib/docker/overlay2/196b6ae042b9b08748253c16f5e2b56d7f9a25077008d3d447cbc75447ed5e2a/diff:/var/lib/docker/overlay2/8c7c3fbb0bed8f7440d11104257806c70d16fd7d432311fc208d5012a439e5a1/diff:/var/lib/docker/overlay2/10604aa3b8ed5698c9fa8f55e00b9a9dc27d6b99135255d6c756fc080e615fc6/diff:/var/lib/docker/overlay2/c6602a17e9b0ab1d84297109204b48e60f293952fbc1feaf362895ec86860ead/diff:/var/lib/docker/overlay2/3828c5e50db9be2b4bc30fce040392ea735da5eaa819ad0848cb4fb79fc367da/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220602180121-283122",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220602180121-283122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220602180121-283122",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3989e814c0ac91c6e28c041e22d15c3ab28ed2a1bbec4bcc9b0a154e6c83dc06",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/3989e814c0ac",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220602180121-283122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c65299b1d424",
	                        "default-k8s-different-port-20220602180121-283122"
	                    ],
	                    "NetworkID": "c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
	I0602 18:06:32.070457  569614 cli_runner.go:164] Run: docker logs --timestamps --details default-k8s-different-port-20220602180121-283122
	I0602 18:06:32.110845  569614 errors.go:91] Postmortem logs ("docker logs --timestamps --details default-k8s-different-port-20220602180121-283122"): -- stdout --
	2022-06-02T18:01:29.039763218Z  + userns=
	2022-06-02T18:01:29.039808871Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2022-06-02T18:01:29.043131676Z  + validate_userns
	2022-06-02T18:01:29.043156229Z  + [[ -z '' ]]
	2022-06-02T18:01:29.043160558Z  + return
	2022-06-02T18:01:29.043163822Z  + configure_containerd
	2022-06-02T18:01:29.043167291Z  + local snapshotter=
	2022-06-02T18:01:29.043174761Z  + [[ -n '' ]]
	2022-06-02T18:01:29.043178841Z  + [[ -z '' ]]
	2022-06-02T18:01:29.043596890Z  ++ stat -f -c %T /kind
	2022-06-02T18:01:29.044841327Z  + '[[overlayfs' == zfs ']]'
	2022-06-02T18:01:29.045209626Z  /usr/local/bin/entrypoint: line 112: [[overlayfs: command not found
	2022-06-02T18:01:29.045441945Z  + [[ -n '' ]]
	2022-06-02T18:01:29.045455930Z  + configure_proxy
	2022-06-02T18:01:29.045460082Z  + mkdir -p /etc/systemd/system.conf.d/
	2022-06-02T18:01:29.047278221Z  + [[ ! -z '' ]]
	2022-06-02T18:01:29.047292665Z  + cat
	2022-06-02T18:01:29.048375191Z  + fix_kmsg
	2022-06-02T18:01:29.048387449Z  + [[ ! -e /dev/kmsg ]]
	2022-06-02T18:01:29.048391595Z  + fix_mount
	2022-06-02T18:01:29.048395027Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2022-06-02T18:01:29.048398880Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2022-06-02T18:01:29.048839165Z  ++ which mount
	2022-06-02T18:01:29.050227870Z  ++ which umount
	2022-06-02T18:01:29.051152303Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2022-06-02T18:01:29.057711578Z  ++ which mount
	2022-06-02T18:01:29.059216309Z  ++ which umount
	2022-06-02T18:01:29.060223981Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2022-06-02T18:01:29.062025798Z  +++ which mount
	2022-06-02T18:01:29.062872200Z  ++ stat -f -c %T /usr/bin/mount
	2022-06-02T18:01:29.064034650Z  + [[ overlayfs == \a\u\f\s ]]
	2022-06-02T18:01:29.064050737Z  + echo 'INFO: remounting /sys read-only'
	2022-06-02T18:01:29.064054924Z  INFO: remounting /sys read-only
	2022-06-02T18:01:29.064058959Z  + mount -o remount,ro /sys
	2022-06-02T18:01:29.066140935Z  + echo 'INFO: making mounts shared'
	2022-06-02T18:01:29.066158066Z  INFO: making mounts shared
	2022-06-02T18:01:29.066162494Z  + mount --make-rshared /
	2022-06-02T18:01:29.067684076Z  + retryable_fix_cgroup
	2022-06-02T18:01:29.068068866Z  ++ seq 0 10
	2022-06-02T18:01:29.068839428Z  + for i in $(seq 0 10)
	2022-06-02T18:01:29.068854693Z  + fix_cgroup
	2022-06-02T18:01:29.068868320Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2022-06-02T18:01:29.068886421Z  + echo 'INFO: detected cgroup v1'
	2022-06-02T18:01:29.068890147Z  INFO: detected cgroup v1
	2022-06-02T18:01:29.068897825Z  + local current_cgroup
	2022-06-02T18:01:29.069718506Z  ++ grep -E '^[^:]*:([^:]*,)?cpu(,[^,:]*)?:.*' /proc/self/cgroup
	2022-06-02T18:01:29.069806519Z  ++ cut -d: -f3
	2022-06-02T18:01:29.071371976Z  + current_cgroup=/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.071388046Z  + '[' /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 = / ']'
	2022-06-02T18:01:29.071392561Z  + echo 'WARN: cgroupns not enabled! Please use cgroup v2, or cgroup v1 with cgroupns enabled.'
	2022-06-02T18:01:29.071396064Z  WARN: cgroupns not enabled! Please use cgroup v2, or cgroup v1 with cgroupns enabled.
	2022-06-02T18:01:29.071399658Z  + echo 'INFO: fix cgroup mounts for all subsystems'
	2022-06-02T18:01:29.071403061Z  INFO: fix cgroup mounts for all subsystems
	2022-06-02T18:01:29.071452466Z  + local cgroup_subsystems
	2022-06-02T18:01:29.072405715Z  ++ findmnt -lun -o source,target -t cgroup
	2022-06-02T18:01:29.072419959Z  ++ grep /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.073555183Z  ++ awk '{print $2}'
	2022-06-02T18:01:29.074852251Z  + cgroup_subsystems='/sys/fs/cgroup/systemd
	2022-06-02T18:01:29.074867993Z  /sys/fs/cgroup/blkio
	2022-06-02T18:01:29.074872556Z  /sys/fs/cgroup/rdma
	2022-06-02T18:01:29.074875936Z  /sys/fs/cgroup/cpu,cpuacct
	2022-06-02T18:01:29.074879269Z  /sys/fs/cgroup/cpuset
	2022-06-02T18:01:29.074882679Z  /sys/fs/cgroup/pids
	2022-06-02T18:01:29.074886391Z  /sys/fs/cgroup/hugetlb
	2022-06-02T18:01:29.074890046Z  /sys/fs/cgroup/devices
	2022-06-02T18:01:29.074893244Z  /sys/fs/cgroup/memory
	2022-06-02T18:01:29.074895452Z  /sys/fs/cgroup/net_cls,net_prio
	2022-06-02T18:01:29.074897509Z  /sys/fs/cgroup/freezer
	2022-06-02T18:01:29.074899502Z  /sys/fs/cgroup/perf_event'
	2022-06-02T18:01:29.074901499Z  + local unsupported_cgroups
	2022-06-02T18:01:29.077265630Z  ++ findmnt -lun -o source,target -t cgroup
	2022-06-02T18:01:29.077284493Z  ++ grep_allow_nomatch -v /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.077288954Z  ++ grep -v /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.077580628Z  ++ awk '{print $2}'
	2022-06-02T18:01:29.079156039Z  ++ [[ 1 == 1 ]]
	2022-06-02T18:01:29.080028697Z  + unsupported_cgroups=
	2022-06-02T18:01:29.080042623Z  + '[' -n '' ']'
	2022-06-02T18:01:29.080047194Z  + local cgroup_mounts
	2022-06-02T18:01:29.080509206Z  ++ grep -E -o '/[[:alnum:]].* /sys/fs/cgroup.*.*cgroup' /proc/self/mountinfo
	2022-06-02T18:01:29.082816196Z  + cgroup_mounts='/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:378 master:9 - cgroup cgroup
	2022-06-02T18:01:29.082834531Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:379 master:16 - cgroup cgroup
	2022-06-02T18:01:29.082839599Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:380 master:17 - cgroup cgroup
	2022-06-02T18:01:29.082843150Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:381 master:18 - cgroup cgroup
	2022-06-02T18:01:29.082846751Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:383 master:19 - cgroup cgroup
	2022-06-02T18:01:29.082850331Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:388 master:20 - cgroup cgroup
	2022-06-02T18:01:29.082854001Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:389 master:21 - cgroup cgroup
	2022-06-02T18:01:29.082858232Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:390 master:22 - cgroup cgroup
	2022-06-02T18:01:29.082861905Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:391 master:23 - cgroup cgroup
	2022-06-02T18:01:29.082865679Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:392 master:24 - cgroup cgroup
	2022-06-02T18:01:29.082869457Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:393 master:25 - cgroup cgroup
	2022-06-02T18:01:29.082873329Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:395 master:26 - cgroup cgroup'
	2022-06-02T18:01:29.082878095Z  + [[ -n /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:378 master:9 - cgroup cgroup
	2022-06-02T18:01:29.082881803Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:379 master:16 - cgroup cgroup
	2022-06-02T18:01:29.082885080Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:380 master:17 - cgroup cgroup
	2022-06-02T18:01:29.082888357Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:381 master:18 - cgroup cgroup
	2022-06-02T18:01:29.082891497Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:383 master:19 - cgroup cgroup
	2022-06-02T18:01:29.082906694Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:388 master:20 - cgroup cgroup
	2022-06-02T18:01:29.082910816Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:389 master:21 - cgroup cgroup
	2022-06-02T18:01:29.082914287Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:390 master:22 - cgroup cgroup
	2022-06-02T18:01:29.082917282Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:391 master:23 - cgroup cgroup
	2022-06-02T18:01:29.082920897Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:392 master:24 - cgroup cgroup
	2022-06-02T18:01:29.082924515Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:393 master:25 - cgroup cgroup
	2022-06-02T18:01:29.082928009Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:395 master:26 - cgroup cgroup ]]
	2022-06-02T18:01:29.082931609Z  + local mount_root
	2022-06-02T18:01:29.083657574Z  ++ head -n 1
	2022-06-02T18:01:29.083802749Z  ++ cut '-d ' -f1
	2022-06-02T18:01:29.085146587Z  + mount_root=/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.085932747Z  ++ echo '/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:378 master:9 - cgroup cgroup
	2022-06-02T18:01:29.086225768Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:379 master:16 - cgroup cgroup
	2022-06-02T18:01:29.086232271Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:380 master:17 - cgroup cgroup
	2022-06-02T18:01:29.086234927Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:381 master:18 - cgroup cgroup
	2022-06-02T18:01:29.086237382Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:383 master:19 - cgroup cgroup
	2022-06-02T18:01:29.086240454Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:388 master:20 - cgroup cgroup
	2022-06-02T18:01:29.086242903Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:389 master:21 - cgroup cgroup
	2022-06-02T18:01:29.086245264Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:390 master:22 - cgroup cgroup
	2022-06-02T18:01:29.086247665Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:391 master:23 - cgroup cgroup
	2022-06-02T18:01:29.086261274Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:392 master:24 - cgroup cgroup
	2022-06-02T18:01:29.086263921Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:393 master:25 - cgroup cgroup
	2022-06-02T18:01:29.086266186Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:395 master:26 - cgroup cgroup'
	2022-06-02T18:01:29.086269528Z  ++ cut '-d ' -f 2
	2022-06-02T18:01:29.087144982Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.087160489Z  + local target=/sys/fs/cgroup/systemd/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.087164281Z  + findmnt /sys/fs/cgroup/systemd/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.089146043Z  + mkdir -p /sys/fs/cgroup/systemd/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.090306813Z  + mount --bind /sys/fs/cgroup/systemd /sys/fs/cgroup/systemd/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.091962127Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.091976378Z  + local target=/sys/fs/cgroup/blkio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.091986075Z  + findmnt /sys/fs/cgroup/blkio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.094712159Z  + mkdir -p /sys/fs/cgroup/blkio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.096024519Z  + mount --bind /sys/fs/cgroup/blkio /sys/fs/cgroup/blkio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.097441634Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.097458429Z  + local target=/sys/fs/cgroup/rdma/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.097462950Z  + findmnt /sys/fs/cgroup/rdma/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.099280363Z  + mkdir -p /sys/fs/cgroup/rdma/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.100422130Z  + mount --bind /sys/fs/cgroup/rdma /sys/fs/cgroup/rdma/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.102093423Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.102111378Z  + local target=/sys/fs/cgroup/cpu,cpuacct/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.102115800Z  + findmnt /sys/fs/cgroup/cpu,cpuacct/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.104384013Z  + mkdir -p /sys/fs/cgroup/cpu,cpuacct/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.105837434Z  + mount --bind /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/cpu,cpuacct/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.107478835Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.107494765Z  + local target=/sys/fs/cgroup/cpuset/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.107556704Z  + findmnt /sys/fs/cgroup/cpuset/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.110130312Z  + mkdir -p /sys/fs/cgroup/cpuset/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.137915941Z  + mount --bind /sys/fs/cgroup/cpuset /sys/fs/cgroup/cpuset/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.139418172Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.139441310Z  + local target=/sys/fs/cgroup/pids/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.139446247Z  + findmnt /sys/fs/cgroup/pids/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.141865239Z  + mkdir -p /sys/fs/cgroup/pids/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.143399838Z  + mount --bind /sys/fs/cgroup/pids /sys/fs/cgroup/pids/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.144898706Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.144920310Z  + local target=/sys/fs/cgroup/hugetlb/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.144925030Z  + findmnt /sys/fs/cgroup/hugetlb/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.147091310Z  + mkdir -p /sys/fs/cgroup/hugetlb/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.148307905Z  + mount --bind /sys/fs/cgroup/hugetlb /sys/fs/cgroup/hugetlb/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.149822485Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.149843406Z  + local target=/sys/fs/cgroup/devices/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.149848272Z  + findmnt /sys/fs/cgroup/devices/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.151915288Z  + mkdir -p /sys/fs/cgroup/devices/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.153114598Z  + mount --bind /sys/fs/cgroup/devices /sys/fs/cgroup/devices/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.154453916Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.154466522Z  + local target=/sys/fs/cgroup/memory/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.154480465Z  + findmnt /sys/fs/cgroup/memory/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.156225539Z  + mkdir -p /sys/fs/cgroup/memory/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.157369269Z  + mount --bind /sys/fs/cgroup/memory /sys/fs/cgroup/memory/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.158724076Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.158743186Z  + local target=/sys/fs/cgroup/net_cls,net_prio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.158749761Z  + findmnt /sys/fs/cgroup/net_cls,net_prio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.161726755Z  + mkdir -p /sys/fs/cgroup/net_cls,net_prio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.163113343Z  + mount --bind /sys/fs/cgroup/net_cls,net_prio /sys/fs/cgroup/net_cls,net_prio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.165269668Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.165288549Z  + local target=/sys/fs/cgroup/freezer/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.165293227Z  + findmnt /sys/fs/cgroup/freezer/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.167765308Z  + mkdir -p /sys/fs/cgroup/freezer/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.169057041Z  + mount --bind /sys/fs/cgroup/freezer /sys/fs/cgroup/freezer/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.170752265Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.170766283Z  + local target=/sys/fs/cgroup/perf_event/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.170769350Z  + findmnt /sys/fs/cgroup/perf_event/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.172757082Z  + mkdir -p /sys/fs/cgroup/perf_event/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.173831049Z  + mount --bind /sys/fs/cgroup/perf_event /sys/fs/cgroup/perf_event/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.175207466Z  + mount --make-rprivate /sys/fs/cgroup
	2022-06-02T18:01:29.177164182Z  + echo '/sys/fs/cgroup/systemd
	2022-06-02T18:01:29.177187926Z  /sys/fs/cgroup/blkio
	2022-06-02T18:01:29.177192392Z  /sys/fs/cgroup/rdma
	2022-06-02T18:01:29.177195842Z  /sys/fs/cgroup/cpu,cpuacct
	2022-06-02T18:01:29.177198902Z  /sys/fs/cgroup/cpuset
	2022-06-02T18:01:29.177202367Z  /sys/fs/cgroup/pids
	2022-06-02T18:01:29.177220227Z  /sys/fs/cgroup/hugetlb
	2022-06-02T18:01:29.177223428Z  /sys/fs/cgroup/devices
	2022-06-02T18:01:29.177226365Z  /sys/fs/cgroup/memory
	2022-06-02T18:01:29.177229728Z  /sys/fs/cgroup/net_cls,net_prio
	2022-06-02T18:01:29.177232715Z  /sys/fs/cgroup/freezer
	2022-06-02T18:01:29.177236010Z  /sys/fs/cgroup/perf_event'
	2022-06-02T18:01:29.177246576Z  + IFS=
	2022-06-02T18:01:29.177249807Z  + read -r subsystem
	2022-06-02T18:01:29.177760415Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/systemd
	2022-06-02T18:01:29.177776349Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.177780337Z  + local subsystem=/sys/fs/cgroup/systemd
	2022-06-02T18:01:29.177783685Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.177786769Z  + mkdir -p /sys/fs/cgroup/systemd//kubelet
	2022-06-02T18:01:29.179120179Z  + '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.179138452Z  + mount --bind /sys/fs/cgroup/systemd//kubelet /sys/fs/cgroup/systemd//kubelet
	2022-06-02T18:01:29.180571532Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/systemd
	2022-06-02T18:01:29.180588028Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.180592028Z  + local subsystem=/sys/fs/cgroup/systemd
	2022-06-02T18:01:29.180595091Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.180598646Z  + mkdir -p /sys/fs/cgroup/systemd//kubelet.slice
	2022-06-02T18:01:29.182121519Z  + '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.182709656Z  + mount --bind /sys/fs/cgroup/systemd//kubelet.slice /sys/fs/cgroup/systemd//kubelet.slice
	2022-06-02T18:01:29.184406295Z  + IFS=
	2022-06-02T18:01:29.184422892Z  + read -r subsystem
	2022-06-02T18:01:29.184427101Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/blkio
	2022-06-02T18:01:29.184430642Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.184434128Z  + local subsystem=/sys/fs/cgroup/blkio
	2022-06-02T18:01:29.184437480Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.184441210Z  + mkdir -p /sys/fs/cgroup/blkio//kubelet
	2022-06-02T18:01:29.185637645Z  + '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.185653509Z  + mount --bind /sys/fs/cgroup/blkio//kubelet /sys/fs/cgroup/blkio//kubelet
	2022-06-02T18:01:29.187365903Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/blkio
	2022-06-02T18:01:29.187381214Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.187385584Z  + local subsystem=/sys/fs/cgroup/blkio
	2022-06-02T18:01:29.187389572Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.187564227Z  + mkdir -p /sys/fs/cgroup/blkio//kubelet.slice
	2022-06-02T18:01:29.188958319Z  + '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.188985214Z  + mount --bind /sys/fs/cgroup/blkio//kubelet.slice /sys/fs/cgroup/blkio//kubelet.slice
	2022-06-02T18:01:29.192597279Z  + IFS=
	2022-06-02T18:01:29.192615219Z  + read -r subsystem
	2022-06-02T18:01:29.192619339Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/rdma
	2022-06-02T18:01:29.192657914Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.192674053Z  + local subsystem=/sys/fs/cgroup/rdma
	2022-06-02T18:01:29.192676982Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.192679275Z  + mkdir -p /sys/fs/cgroup/rdma//kubelet
	2022-06-02T18:01:29.194145746Z  + '[' /sys/fs/cgroup/rdma == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.194162398Z  + mount --bind /sys/fs/cgroup/rdma//kubelet /sys/fs/cgroup/rdma//kubelet
	2022-06-02T18:01:29.195577859Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/rdma
	2022-06-02T18:01:29.195595824Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.195600244Z  + local subsystem=/sys/fs/cgroup/rdma
	2022-06-02T18:01:29.195603946Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.195607656Z  + mkdir -p /sys/fs/cgroup/rdma//kubelet.slice
	2022-06-02T18:01:29.196632358Z  + '[' /sys/fs/cgroup/rdma == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.196646035Z  + mount --bind /sys/fs/cgroup/rdma//kubelet.slice /sys/fs/cgroup/rdma//kubelet.slice
	2022-06-02T18:01:29.198116187Z  + IFS=
	2022-06-02T18:01:29.198134884Z  + read -r subsystem
	2022-06-02T18:01:29.198139225Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpu,cpuacct
	2022-06-02T18:01:29.198143142Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.198146898Z  + local subsystem=/sys/fs/cgroup/cpu,cpuacct
	2022-06-02T18:01:29.198154972Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.198158880Z  + mkdir -p /sys/fs/cgroup/cpu,cpuacct//kubelet
	2022-06-02T18:01:29.199366707Z  + '[' /sys/fs/cgroup/cpu,cpuacct == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.199382497Z  + mount --bind /sys/fs/cgroup/cpu,cpuacct//kubelet /sys/fs/cgroup/cpu,cpuacct//kubelet
	2022-06-02T18:01:29.200860053Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/cpu,cpuacct
	2022-06-02T18:01:29.200876151Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.200880474Z  + local subsystem=/sys/fs/cgroup/cpu,cpuacct
	2022-06-02T18:01:29.200884161Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.200887658Z  + mkdir -p /sys/fs/cgroup/cpu,cpuacct//kubelet.slice
	2022-06-02T18:01:29.202203786Z  + '[' /sys/fs/cgroup/cpu,cpuacct == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.202226790Z  + mount --bind /sys/fs/cgroup/cpu,cpuacct//kubelet.slice /sys/fs/cgroup/cpu,cpuacct//kubelet.slice
	2022-06-02T18:01:29.203680615Z  + IFS=
	2022-06-02T18:01:29.203698566Z  + read -r subsystem
	2022-06-02T18:01:29.203718856Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuset
	2022-06-02T18:01:29.203723726Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.203727371Z  + local subsystem=/sys/fs/cgroup/cpuset
	2022-06-02T18:01:29.203744975Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.203748997Z  + mkdir -p /sys/fs/cgroup/cpuset//kubelet
	2022-06-02T18:01:29.237608820Z  + '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.237637790Z  + cat /sys/fs/cgroup/cpuset/cpuset.cpus
	2022-06-02T18:01:29.239921660Z  + cat /sys/fs/cgroup/cpuset/cpuset.mems
	2022-06-02T18:01:29.241292873Z  + mount --bind /sys/fs/cgroup/cpuset//kubelet /sys/fs/cgroup/cpuset//kubelet
	2022-06-02T18:01:29.243779504Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/cpuset
	2022-06-02T18:01:29.243801445Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.243806119Z  + local subsystem=/sys/fs/cgroup/cpuset
	2022-06-02T18:01:29.243818479Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.243823695Z  + mkdir -p /sys/fs/cgroup/cpuset//kubelet.slice
	2022-06-02T18:01:29.247186658Z  + '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.247217554Z  + cat /sys/fs/cgroup/cpuset/cpuset.cpus
	2022-06-02T18:01:29.249189687Z  + cat /sys/fs/cgroup/cpuset/cpuset.mems
	2022-06-02T18:01:29.250073305Z  + mount --bind /sys/fs/cgroup/cpuset//kubelet.slice /sys/fs/cgroup/cpuset//kubelet.slice
	2022-06-02T18:01:29.251569478Z  + IFS=
	2022-06-02T18:01:29.251589131Z  + read -r subsystem
	2022-06-02T18:01:29.251593237Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/pids
	2022-06-02T18:01:29.251596674Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.251600251Z  + local subsystem=/sys/fs/cgroup/pids
	2022-06-02T18:01:29.251603378Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.251606711Z  + mkdir -p /sys/fs/cgroup/pids//kubelet
	2022-06-02T18:01:29.252740303Z  + '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.252754785Z  + mount --bind /sys/fs/cgroup/pids//kubelet /sys/fs/cgroup/pids//kubelet
	2022-06-02T18:01:29.254111787Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/pids
	2022-06-02T18:01:29.254126404Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.254129309Z  + local subsystem=/sys/fs/cgroup/pids
	2022-06-02T18:01:29.254131624Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.254134036Z  + mkdir -p /sys/fs/cgroup/pids//kubelet.slice
	2022-06-02T18:01:29.255392031Z  + '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.255408263Z  + mount --bind /sys/fs/cgroup/pids//kubelet.slice /sys/fs/cgroup/pids//kubelet.slice
	2022-06-02T18:01:29.256812147Z  + IFS=
	2022-06-02T18:01:29.256841585Z  + read -r subsystem
	2022-06-02T18:01:29.256845927Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/hugetlb
	2022-06-02T18:01:29.256930513Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.256944065Z  + local subsystem=/sys/fs/cgroup/hugetlb
	2022-06-02T18:01:29.256947127Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.256949358Z  + mkdir -p /sys/fs/cgroup/hugetlb//kubelet
	2022-06-02T18:01:29.258164309Z  + '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.258180158Z  + mount --bind /sys/fs/cgroup/hugetlb//kubelet /sys/fs/cgroup/hugetlb//kubelet
	2022-06-02T18:01:29.259446400Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/hugetlb
	2022-06-02T18:01:29.259459305Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.259463348Z  + local subsystem=/sys/fs/cgroup/hugetlb
	2022-06-02T18:01:29.259467128Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.259470930Z  + mkdir -p /sys/fs/cgroup/hugetlb//kubelet.slice
	2022-06-02T18:01:29.260777885Z  + '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.260795760Z  + mount --bind /sys/fs/cgroup/hugetlb//kubelet.slice /sys/fs/cgroup/hugetlb//kubelet.slice
	2022-06-02T18:01:29.262131205Z  + IFS=
	2022-06-02T18:01:29.262149142Z  + read -r subsystem
	2022-06-02T18:01:29.262153384Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/devices
	2022-06-02T18:01:29.262157515Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.262161127Z  + local subsystem=/sys/fs/cgroup/devices
	2022-06-02T18:01:29.262164486Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.262218521Z  + mkdir -p /sys/fs/cgroup/devices//kubelet
	2022-06-02T18:01:29.263394209Z  + '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.263405702Z  + mount --bind /sys/fs/cgroup/devices//kubelet /sys/fs/cgroup/devices//kubelet
	2022-06-02T18:01:29.264772219Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/devices
	2022-06-02T18:01:29.264788042Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.264792006Z  + local subsystem=/sys/fs/cgroup/devices
	2022-06-02T18:01:29.264795401Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.264799072Z  + mkdir -p /sys/fs/cgroup/devices//kubelet.slice
	2022-06-02T18:01:29.265986529Z  + '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.266001907Z  + mount --bind /sys/fs/cgroup/devices//kubelet.slice /sys/fs/cgroup/devices//kubelet.slice
	2022-06-02T18:01:29.267281308Z  + IFS=
	2022-06-02T18:01:29.267291915Z  + read -r subsystem
	2022-06-02T18:01:29.267295594Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/memory
	2022-06-02T18:01:29.267299004Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.267314907Z  + local subsystem=/sys/fs/cgroup/memory
	2022-06-02T18:01:29.267318303Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.267321966Z  + mkdir -p /sys/fs/cgroup/memory//kubelet
	2022-06-02T18:01:29.268473700Z  + '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.268486955Z  + mount --bind /sys/fs/cgroup/memory//kubelet /sys/fs/cgroup/memory//kubelet
	2022-06-02T18:01:29.269825883Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/memory
	2022-06-02T18:01:29.269836824Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.269840246Z  + local subsystem=/sys/fs/cgroup/memory
	2022-06-02T18:01:29.269843732Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.269847057Z  + mkdir -p /sys/fs/cgroup/memory//kubelet.slice
	2022-06-02T18:01:29.270851481Z  + '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.270866091Z  + mount --bind /sys/fs/cgroup/memory//kubelet.slice /sys/fs/cgroup/memory//kubelet.slice
	2022-06-02T18:01:29.272059743Z  + IFS=
	2022-06-02T18:01:29.272076780Z  + read -r subsystem
	2022-06-02T18:01:29.272081903Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_cls,net_prio
	2022-06-02T18:01:29.272085760Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.272089276Z  + local subsystem=/sys/fs/cgroup/net_cls,net_prio
	2022-06-02T18:01:29.272092643Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.272096041Z  + mkdir -p /sys/fs/cgroup/net_cls,net_prio//kubelet
	2022-06-02T18:01:29.273172225Z  + '[' /sys/fs/cgroup/net_cls,net_prio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.273186591Z  + mount --bind /sys/fs/cgroup/net_cls,net_prio//kubelet /sys/fs/cgroup/net_cls,net_prio//kubelet
	2022-06-02T18:01:29.274436179Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/net_cls,net_prio
	2022-06-02T18:01:29.274450315Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.274454109Z  + local subsystem=/sys/fs/cgroup/net_cls,net_prio
	2022-06-02T18:01:29.274457360Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.274460871Z  + mkdir -p /sys/fs/cgroup/net_cls,net_prio//kubelet.slice
	2022-06-02T18:01:29.275423077Z  + '[' /sys/fs/cgroup/net_cls,net_prio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.275438015Z  + mount --bind /sys/fs/cgroup/net_cls,net_prio//kubelet.slice /sys/fs/cgroup/net_cls,net_prio//kubelet.slice
	2022-06-02T18:01:29.276707270Z  + IFS=
	2022-06-02T18:01:29.276735605Z  + read -r subsystem
	2022-06-02T18:01:29.276740667Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/freezer
	2022-06-02T18:01:29.276744095Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.276747512Z  + local subsystem=/sys/fs/cgroup/freezer
	2022-06-02T18:01:29.276751042Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.276754760Z  + mkdir -p /sys/fs/cgroup/freezer//kubelet
	2022-06-02T18:01:29.277980282Z  + '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.277995548Z  + mount --bind /sys/fs/cgroup/freezer//kubelet /sys/fs/cgroup/freezer//kubelet
	2022-06-02T18:01:29.279171123Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/freezer
	2022-06-02T18:01:29.279185525Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.279189626Z  + local subsystem=/sys/fs/cgroup/freezer
	2022-06-02T18:01:29.279193536Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.279197005Z  + mkdir -p /sys/fs/cgroup/freezer//kubelet.slice
	2022-06-02T18:01:29.280533530Z  + '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.280545457Z  + mount --bind /sys/fs/cgroup/freezer//kubelet.slice /sys/fs/cgroup/freezer//kubelet.slice
	2022-06-02T18:01:29.281822406Z  + IFS=
	2022-06-02T18:01:29.281838306Z  + read -r subsystem
	2022-06-02T18:01:29.281842894Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/perf_event
	2022-06-02T18:01:29.281846506Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.281849608Z  + local subsystem=/sys/fs/cgroup/perf_event
	2022-06-02T18:01:29.281855327Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.281858791Z  + mkdir -p /sys/fs/cgroup/perf_event//kubelet
	2022-06-02T18:01:29.283032331Z  + '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.283048168Z  + mount --bind /sys/fs/cgroup/perf_event//kubelet /sys/fs/cgroup/perf_event//kubelet
	2022-06-02T18:01:29.284184484Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/perf_event
	2022-06-02T18:01:29.284197566Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.284201459Z  + local subsystem=/sys/fs/cgroup/perf_event
	2022-06-02T18:01:29.284205031Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.284208643Z  + mkdir -p /sys/fs/cgroup/perf_event//kubelet.slice
	2022-06-02T18:01:29.285270234Z  + '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.285283788Z  + mount --bind /sys/fs/cgroup/perf_event//kubelet.slice /sys/fs/cgroup/perf_event//kubelet.slice
	2022-06-02T18:01:29.286511300Z  + IFS=
	2022-06-02T18:01:29.286525835Z  + read -r subsystem
	2022-06-02T18:01:29.286827738Z  + return
	2022-06-02T18:01:29.286841924Z  + fix_machine_id
	2022-06-02T18:01:29.286845970Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2022-06-02T18:01:29.286850027Z  INFO: clearing and regenerating /etc/machine-id
	2022-06-02T18:01:29.286855018Z  + rm -f /etc/machine-id
	2022-06-02T18:01:29.287852401Z  + systemd-machine-id-setup
	2022-06-02T18:01:29.291555583Z  Initializing machine ID from D-Bus machine ID.
	2022-06-02T18:01:29.294608659Z  + fix_product_name
	2022-06-02T18:01:29.294646421Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2022-06-02T18:01:29.294651254Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2022-06-02T18:01:29.294658461Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2022-06-02T18:01:29.294662429Z  + echo kind
	2022-06-02T18:01:29.294848199Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2022-06-02T18:01:29.296489128Z  + fix_product_uuid
	2022-06-02T18:01:29.296501596Z  + [[ ! -f /kind/product_uuid ]]
	2022-06-02T18:01:29.296504132Z  + cat /proc/sys/kernel/random/uuid
	2022-06-02T18:01:29.297584756Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2022-06-02T18:01:29.297604288Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2022-06-02T18:01:29.297608023Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2022-06-02T18:01:29.297610883Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2022-06-02T18:01:29.298927791Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2022-06-02T18:01:29.298939874Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2022-06-02T18:01:29.298942624Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2022-06-02T18:01:29.298944832Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2022-06-02T18:01:29.300500332Z  + select_iptables
	2022-06-02T18:01:29.300515241Z  + local mode num_legacy_lines num_nft_lines
	2022-06-02T18:01:29.301436075Z  ++ grep -c '^-'
	2022-06-02T18:01:29.305171416Z  + num_legacy_lines=6
	2022-06-02T18:01:29.305954579Z  ++ grep -c '^-'
	2022-06-02T18:01:29.310092390Z  ++ true
	2022-06-02T18:01:29.310273248Z  + num_nft_lines=0
	2022-06-02T18:01:29.310283889Z  + '[' 6 -ge 0 ']'
	2022-06-02T18:01:29.310355517Z  + mode=legacy
	2022-06-02T18:01:29.310373925Z  + echo 'INFO: setting iptables to detected mode: legacy'
	2022-06-02T18:01:29.310378405Z  INFO: setting iptables to detected mode: legacy
	2022-06-02T18:01:29.310382032Z  + update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-06-02T18:01:29.310414391Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-legacy'
	2022-06-02T18:01:29.310428412Z  + local 'args=--set iptables /usr/sbin/iptables-legacy'
	2022-06-02T18:01:29.310924243Z  ++ seq 0 15
	2022-06-02T18:01:29.311528431Z  + for i in $(seq 0 15)
	2022-06-02T18:01:29.311544440Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-06-02T18:01:29.314951921Z  + return
	2022-06-02T18:01:29.314971640Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-06-02T18:01:29.315112840Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-legacy'
	2022-06-02T18:01:29.315133627Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-legacy'
	2022-06-02T18:01:29.315508413Z  ++ seq 0 15
	2022-06-02T18:01:29.316171506Z  + for i in $(seq 0 15)
	2022-06-02T18:01:29.316180783Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-06-02T18:01:29.319231649Z  + return
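	# NOTE (annotation; a minimal sketch of select_iptables as traced above, not
	# the verbatim script): count the rules each backend reports and keep the
	# busier one; here 6 legacy rules vs 0 nft rules selects legacy mode.
	#   num_legacy_lines=$( (iptables-legacy-save || true) 2>/dev/null | grep -c '^-' || true)
	#   num_nft_lines=$( (iptables-nft-save || true) 2>/dev/null | grep -c '^-' || true)
	#   if [ "$num_legacy_lines" -ge "$num_nft_lines" ]; then mode=legacy; else mode=nft; fi
	#   update-alternatives --set iptables "/usr/sbin/iptables-${mode}"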
	2022-06-02T18:01:29.319252497Z  + enable_network_magic
	2022-06-02T18:01:29.319261361Z  + local docker_embedded_dns_ip=127.0.0.11
	2022-06-02T18:01:29.319265285Z  + local docker_host_ip
	2022-06-02T18:01:29.320449615Z  ++ cut '-d ' -f1
	2022-06-02T18:01:29.320601990Z  ++ head -n1 /dev/fd/63
	2022-06-02T18:01:29.320676505Z  +++ getent ahostsv4 host.docker.internal
	2022-06-02T18:01:29.337394062Z  + docker_host_ip=
	2022-06-02T18:01:29.337422978Z  + [[ -z '' ]]
	2022-06-02T18:01:29.338119728Z  ++ ip -4 route show default
	2022-06-02T18:01:29.338213560Z  ++ cut '-d ' -f3
	2022-06-02T18:01:29.339783648Z  + docker_host_ip=192.168.67.1
	2022-06-02T18:01:29.340089618Z  + iptables-save
	2022-06-02T18:01:29.341240737Z  + iptables-restore
	2022-06-02T18:01:29.341915326Z  + sed -e 's/-d 127.0.0.11/-d 192.168.67.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.67.1:53/g'
	2022-06-02T18:01:29.344938104Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2022-06-02T18:01:29.346371079Z  + sed -e s/127.0.0.11/192.168.67.1/g /etc/resolv.conf.original
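	# NOTE (annotation, hedged): enable_network_magic could not resolve
	# host.docker.internal, so the default-route gateway (192.168.67.1) stands in
	# for the Docker host, and every reference to Docker's embedded DNS at
	# 127.0.0.11 is rewritten to it: in the NAT rules (the iptables-save | sed |
	# iptables-restore pipeline above) and in the resolver config, presumably as:
	#   sed -e 's/127.0.0.11/192.168.67.1/g' /etc/resolv.conf.original > /etc/resolv.conf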
	2022-06-02T18:01:29.348833391Z  ++ cut '-d ' -f1
	2022-06-02T18:01:29.348982790Z  ++ head -n1 /dev/fd/63
	2022-06-02T18:01:29.349573906Z  ++++ hostname
	2022-06-02T18:01:29.350243315Z  +++ getent ahostsv4 default-k8s-different-port-20220602180121-283122
	2022-06-02T18:01:29.352149497Z  + curr_ipv4=192.168.67.2
	2022-06-02T18:01:29.352164941Z  + echo 'INFO: Detected IPv4 address: 192.168.67.2'
	2022-06-02T18:01:29.352169101Z  INFO: Detected IPv4 address: 192.168.67.2
	2022-06-02T18:01:29.352172947Z  + '[' -f /kind/old-ipv4 ']'
	2022-06-02T18:01:29.352218828Z  + [[ -n 192.168.67.2 ]]
	2022-06-02T18:01:29.352232732Z  + echo -n 192.168.67.2
	2022-06-02T18:01:29.353484475Z  ++ cut '-d ' -f1
	2022-06-02T18:01:29.353500787Z  ++ head -n1 /dev/fd/63
	2022-06-02T18:01:29.354044710Z  ++++ hostname
	2022-06-02T18:01:29.354661546Z  +++ getent ahostsv6 default-k8s-different-port-20220602180121-283122
	2022-06-02T18:01:29.356229784Z  + curr_ipv6=
	2022-06-02T18:01:29.356243241Z  + echo 'INFO: Detected IPv6 address: '
	2022-06-02T18:01:29.356247285Z  INFO: Detected IPv6 address: 
	2022-06-02T18:01:29.356264408Z  + '[' -f /kind/old-ipv6 ']'
	2022-06-02T18:01:29.356269624Z  + [[ -n '' ]]
	2022-06-02T18:01:29.356725003Z  ++ uname -a
	2022-06-02T18:01:29.357466503Z  + echo 'entrypoint completed: Linux default-k8s-different-port-20220602180121-283122 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux'
	2022-06-02T18:01:29.357481318Z  entrypoint completed: Linux default-k8s-different-port-20220602180121-283122 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	2022-06-02T18:01:29.357485435Z  + exec /sbin/init
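	# NOTE (annotation, hedged): exec replaces the entrypoint with systemd as
	# PID 1, so the container "boots" Ubuntu from here on; stopping the container
	# delivers the image's StopSignal (SIGRTMIN+3, per the inspect output further
	# down in this log), which is what triggers the orderly shutdown recorded at
	# 18:06:20 below.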
	2022-06-02T18:01:29.363773075Z  systemd 245.4-4ubuntu3.17 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
	2022-06-02T18:01:29.363790846Z  Detected virtualization docker.
	2022-06-02T18:01:29.363793756Z  Detected architecture x86-64.
	2022-06-02T18:01:29.364127978Z  
	2022-06-02T18:01:29.364144348Z  Welcome to Ubuntu 20.04.4 LTS!
	2022-06-02T18:01:29.364148832Z  
	2022-06-02T18:01:29.364152189Z  Set hostname to <default-k8s-different-port-20220602180121-283122>.
	2022-06-02T18:01:29.406434747Z  [  OK  ] Started Dispatch Password …ts to Console Directory Watch.
	2022-06-02T18:01:29.406641471Z  [  OK  ] Set up automount Arbitrary…s File System Automount Point.
	2022-06-02T18:01:29.406660762Z  [  OK  ] Reached target Local Encrypted Volumes.
	2022-06-02T18:01:29.406665847Z  [  OK  ] Reached target Network is Online.
	2022-06-02T18:01:29.406669846Z  [  OK  ] Reached target Paths.
	2022-06-02T18:01:29.406696304Z  [  OK  ] Reached target Slices.
	2022-06-02T18:01:29.406705400Z  [  OK  ] Reached target Swap.
	2022-06-02T18:01:29.406949554Z  [  OK  ] Listening on Journal Audit Socket.
	2022-06-02T18:01:29.407034757Z  [  OK  ] Listening on Journal Socket (/dev/log).
	2022-06-02T18:01:29.407136645Z  [  OK  ] Listening on Journal Socket.
	2022-06-02T18:01:29.408824571Z           Mounting Huge Pages File System...
	2022-06-02T18:01:29.410258698Z           Mounting Kernel Debug File System...
	2022-06-02T18:01:29.411718845Z           Mounting Kernel Trace File System...
	2022-06-02T18:01:29.413650528Z           Starting Journal Service...
	2022-06-02T18:01:29.415619100Z           Starting Create list of st…odes for the current kernel...
	2022-06-02T18:01:29.417912149Z           Mounting FUSE Control File System...
	2022-06-02T18:01:29.418891040Z           Starting Remount Root and Kernel File Systems...
	2022-06-02T18:01:29.420257553Z           Starting Apply Kernel Variables...
	2022-06-02T18:01:29.422897568Z  [  OK  ] Mounted Huge Pages File System.
	2022-06-02T18:01:29.422917196Z  [  OK  ] Mounted Kernel Debug File System.
	2022-06-02T18:01:29.422921656Z  [  OK  ] Mounted Kernel Trace File System.
	2022-06-02T18:01:29.423359417Z  [  OK  ] Finished Create list of st… nodes for the current kernel.
	2022-06-02T18:01:29.423591575Z  [  OK  ] Mounted FUSE Control File System.
	2022-06-02T18:01:29.426323728Z  [  OK  ] Finished Remount Root and Kernel File Systems.
	2022-06-02T18:01:29.428040044Z           Starting Create System Users...
	2022-06-02T18:01:29.434330773Z           Starting Update UTMP about System Boot/Shutdown...
	2022-06-02T18:01:29.435174793Z  [  OK  ] Finished Apply Kernel Variables.
	2022-06-02T18:01:29.442360738Z  [  OK  ] Finished Update UTMP about System Boot/Shutdown.
	2022-06-02T18:01:29.447907100Z  [  OK  ] Started Journal Service.
	2022-06-02T18:01:29.450095964Z           Starting Flush Journal to Persistent Storage...
	2022-06-02T18:01:29.456675811Z  [  OK  ] Finished Create System Users.
	2022-06-02T18:01:29.457417394Z  [  OK  ] Finished Flush Journal to Persistent Storage.
	2022-06-02T18:01:29.459836512Z           Starting Create Static Device Nodes in /dev...
	2022-06-02T18:01:29.466591212Z  [  OK  ] Finished Create Static Device Nodes in /dev.
	2022-06-02T18:01:29.466671877Z  [  OK  ] Reached target Local File Systems (Pre).
	2022-06-02T18:01:29.466691063Z  [  OK  ] Reached target Local File Systems.
	2022-06-02T18:01:29.466950987Z  [  OK  ] Reached target System Initialization.
	2022-06-02T18:01:29.466967401Z  [  OK  ] Started Daily Cleanup of Temporary Directories.
	2022-06-02T18:01:29.466973806Z  [  OK  ] Reached target Timers.
	2022-06-02T18:01:29.467169244Z  [  OK  ] Listening on BuildKit.
	2022-06-02T18:01:29.467311174Z  [  OK  ] Listening on D-Bus System Message Bus Socket.
	2022-06-02T18:01:29.468453122Z           Starting Docker Socket for the API.
	2022-06-02T18:01:29.472377315Z           Starting Podman API Socket.
	2022-06-02T18:01:29.472751836Z  [  OK  ] Listening on Docker Socket for the API.
	2022-06-02T18:01:29.473823525Z  [  OK  ] Listening on Podman API Socket.
	2022-06-02T18:01:29.473842010Z  [  OK  ] Reached target Sockets.
	2022-06-02T18:01:29.473867362Z  [  OK  ] Reached target Basic System.
	2022-06-02T18:01:29.475188197Z           Starting containerd container runtime...
	2022-06-02T18:01:29.476453906Z  [  OK  ] Started D-Bus System Message Bus.
	2022-06-02T18:01:29.479141940Z           Starting minikube automount...
	2022-06-02T18:01:29.480481559Z           Starting OpenBSD Secure Shell server...
	2022-06-02T18:01:29.497667002Z  [  OK  ] Finished minikube automount.
	2022-06-02T18:01:29.501375783Z  [  OK  ] Started OpenBSD Secure Shell server.
	2022-06-02T18:01:29.543661372Z  [  OK  ] Started containerd container runtime.
	2022-06-02T18:01:29.544900658Z           Starting Docker Application Container Engine...
	2022-06-02T18:01:29.785421600Z  [  OK  ] Started Docker Application Container Engine.
	2022-06-02T18:01:29.785487356Z  [  OK  ] Reached target Multi-User System.
	2022-06-02T18:01:29.785510276Z  [  OK  ] Reached target Graphical Interface.
	2022-06-02T18:01:29.786939655Z           Starting Update UTMP about System Runlevel Changes...
	2022-06-02T18:01:29.794756035Z  [  OK  ] Finished Update UTMP about System Runlevel Changes.
	2022-06-02T18:06:20.422356963Z  [  OK  ] Stopped target Graphical Interface.
	2022-06-02T18:06:20.422436018Z  [  OK  ] Stopped target Multi-User System.
	2022-06-02T18:06:20.422581433Z  [  OK  ] Stopped target Timers.
	2022-06-02T18:06:20.422881739Z  [  OK  ] Stopped Daily Cleanup of Temporary Directories.
	2022-06-02T18:06:20.424712617Z           Stopping D-Bus System Message Bus...
	2022-06-02T18:06:20.424899701Z           Stopping Docker Application Container Engine...
	2022-06-02T18:06:20.426799339Z           Stopping kubelet: The Kubernetes Node Agent...
	2022-06-02T18:06:20.426834962Z           Stopping OpenBSD Secure Shell server...
	2022-06-02T18:06:20.426840665Z  [  OK  ] Stopped D-Bus System Message Bus.
	2022-06-02T18:06:20.427435680Z  [  OK  ] Stopped OpenBSD Secure Shell server.
	2022-06-02T18:06:20.535737875Z  [  OK  ] Stopped kubelet: The Kubernetes Node Agent.
	2022-06-02T18:06:20.738910224Z  [  OK  ] Unmounted /var/lib/docker/…44c6c8055055cd80e0/mounts/shm.
	2022-06-02T18:06:20.741323985Z  [  OK  ] Unmounted /var/lib/docker/…b8faa30b1f595643c440e9/merged.
	2022-06-02T18:06:20.753364496Z  [  OK  ] Unmounted /var/lib/docker/…8aed59381fff95b04e7777/merged.
	2022-06-02T18:06:20.755108447Z  [  OK  ] Unmounted /var/lib/docker/…42d8f1b09dd2a872c64709/merged.
	2022-06-02T18:06:20.758866825Z  [  OK  ] Unmounted /var/lib/docker/…85308d510fed937196/mounts/shm.
	2022-06-02T18:06:20.760968263Z  [  OK  ] Unmounted /var/lib/docker/…a81b7a812f08b35335cfa3/merged.
	2022-06-02T18:06:20.763389079Z  [  OK  ] Unmounted /var/lib/docker/…fdf56ec4b7478274397c68/merged.
	2022-06-02T18:06:20.764169748Z  [  OK  ] Unmounted /var/lib/docker/…a50b5ef22121446055/mounts/shm.
	2022-06-02T18:06:20.764801720Z  [  OK  ] Unmounted /var/lib/docker/…35410772eb30afd3a8ac91/merged.
	2022-06-02T18:06:20.773238079Z  [  OK  ] Unmounted /var/lib/docker/…b278683c353a72039f/mounts/shm.
	2022-06-02T18:06:20.773876178Z  [  OK  ] Unmounted /var/lib/docker/…71df5f43ba561b663d3da6/merged.
	2022-06-02T18:06:20.775281670Z  [  OK  ] Unmounted /var/lib/docker/…ce27bd58d174a85999/mounts/shm.
	2022-06-02T18:06:20.775398954Z  [  OK  ] Unmounted /var/lib/docker/…5aa5e70ae8f1800238d3bc/merged.
	2022-06-02T18:06:20.779602056Z  [  OK  ] Unmounted /var/lib/docker/…fd00c7bdf5042abfc0/mounts/shm.
	2022-06-02T18:06:20.780076856Z  [  OK  ] Unmounted /var/lib/docker/…fff88df05231ac4c06f76d/merged.
	2022-06-02T18:06:21.006048024Z  [  OK  ] Unmounted /run/docker/netns/074dfeac4343.
	2022-06-02T18:06:21.007284622Z  [  OK  ] Unmounted /var/lib/docker/…3654978d85037ce41c/mounts/shm.
	2022-06-02T18:06:21.007549696Z  [  OK  ] Unmounted /var/lib/docker/…da7d44f7bb96377952adb3/merged.
	2022-06-02T18:06:21.054658948Z  [  OK  ] Unmounted /run/docker/netns/2d5d7e3331b1.
	2022-06-02T18:06:21.055761384Z  [  OK  ] Unmounted /var/lib/docker/…a4e40ca0a789f66b56/mounts/shm.
	2022-06-02T18:06:21.056163782Z  [  OK  ] Unmounted /var/lib/docker/…23c1fd001c4078f9857da7/merged.
	2022-06-02T18:06:21.678856952Z  [  OK  ] Unmounted /var/lib/docker/…3a3581c3529f988f655448/merged.
	2022-06-02T18:06:23.696056241Z  [*     ] A stop job is running for Docker Ap…n Container Engine (1s / 1min 28s)
	2022-06-02T18:06:24.196004370Z  [**    ] A stop job is running for Docker Ap…n Container Engine (2s / 1min 28s)
	2022-06-02T18:06:24.695988204Z  [***   ] A stop job is running for Docker Ap…n Container Engine (2s / 1min 28s)
	2022-06-02T18:06:25.195980621Z  [ ***  ] A stop job is running for Docker Ap…n Container Engine (3s / 1min 28s)
	2022-06-02T18:06:25.555165737Z  [  *** ] A stop job is running for Docker Ap…n Container Engine (3s / 1min 28s)
	2022-06-02T18:06:25.567548679Z  [  OK  ] Unmounted /var/lib/docker/…f714eb4af933c69dcac60a/merged.
	2022-06-02T18:06:27.695984416Z  [   ***] A stop job is running for Docker Ap…n Container Engine (5s / 1min 28s)
	2022-06-02T18:06:28.196007498Z  [    **] A stop job is running for Docker Ap…n Container Engine (6s / 1min 28s)
	2022-06-02T18:06:28.695954860Z  [     *] A stop job is running for Docker Ap…n Container Engine (6s / 1min 28s)
	2022-06-02T18:06:29.196153973Z  [    **] A stop job is running for Docker Ap…n Container Engine (7s / 1min 28s)
	2022-06-02T18:06:29.695957525Z  [   ***] A stop job is running for Docker Ap…n Container Engine (7s / 1min 28s)
	2022-06-02T18:06:30.196068812Z  [  *** ] A stop job is running for Docker Ap…n Container Engine (8s / 1min 28s)
	2022-06-02T18:06:30.502837291Z  [  OK  ] Unmounted /var/lib/docker/…804411b4c63563124878bc/merged.
	2022-06-02T18:06:30.593157553Z  [  OK  ] Unmounted /var/lib/docker/…d5ebf22a1974098a1d2a4c/merged.
	2022-06-02T18:06:30.627055935Z  [  OK  ] Stopped Docker Application Container Engine.
	2022-06-02T18:06:30.627260550Z  [  OK  ] Stopped target Network is Online.
	2022-06-02T18:06:30.627376840Z           Stopping containerd container runtime...
	2022-06-02T18:06:30.628008576Z  [  OK  ] Stopped minikube automount.
	2022-06-02T18:06:30.637241518Z  [  OK  ] Stopped containerd container runtime.
	2022-06-02T18:06:30.637361948Z  [  OK  ] Stopped target Basic System.
	2022-06-02T18:06:30.637467519Z  [  OK  ] Stopped target Paths.
	2022-06-02T18:06:30.637474749Z  [  OK  ] Stopped target Slices.
	2022-06-02T18:06:30.637522754Z  [  OK  ] Stopped target Sockets.
	2022-06-02T18:06:30.638103459Z  [  OK  ] Closed BuildKit.
	2022-06-02T18:06:30.638653861Z  [  OK  ] Closed D-Bus System Message Bus Socket.
	2022-06-02T18:06:30.639146739Z  [  OK  ] Closed Docker Socket for the API.
	2022-06-02T18:06:30.639655907Z  [  OK  ] Closed Podman API Socket.
	2022-06-02T18:06:30.639671463Z  [  OK  ] Stopped target System Initialization.
	2022-06-02T18:06:30.639737655Z  [  OK  ] Stopped target Local Encrypted Volumes.
	2022-06-02T18:06:30.653393436Z  [  OK  ] Stopped Dispatch Password …ts to Console Directory Watch.
	2022-06-02T18:06:30.653427926Z  [  OK  ] Stopped target Local File Systems.
	2022-06-02T18:06:30.654594492Z           Unmounting /data...
	2022-06-02T18:06:30.655297078Z           Unmounting /etc/hostname...
	2022-06-02T18:06:30.655974410Z           Unmounting /etc/hosts...
	2022-06-02T18:06:30.657268816Z           Unmounting /etc/resolv.conf...
	2022-06-02T18:06:30.657778311Z           Unmounting /kind/product_uuid...
	2022-06-02T18:06:30.658613634Z           Unmounting /run/docker/netns/default...
	2022-06-02T18:06:30.659460762Z           Unmounting /tmp/hostpath-provisioner...
	2022-06-02T18:06:30.660348299Z           Unmounting /tmp/hostpath_pv...
	2022-06-02T18:06:30.663954101Z           Unmounting /usr/lib/modules...
	2022-06-02T18:06:30.665432494Z           Unmounting /var/lib/kubele…ected/kube-api-access-m5qcm...
	2022-06-02T18:06:30.667062533Z           Unmounting /var/lib/kubele…ected/kube-api-access-w79cf...
	2022-06-02T18:06:30.668437958Z           Unmounting /var/lib/kubele…ected/kube-api-access-9546w...
	2022-06-02T18:06:30.670016444Z           Unmounting /var/lib/kubele…ected/kube-api-access-z7xs9...
	2022-06-02T18:06:30.671308854Z           Unmounting /var/lib/kubele…ected/kube-api-access-w72rg...
	2022-06-02T18:06:30.672192911Z  [  OK  ] Stopped Apply Kernel Variables.
	2022-06-02T18:06:30.673113612Z           Stopping Update UTMP about System Boot/Shutdown...
	2022-06-02T18:06:30.676820270Z  [  OK  ] Unmounted /data.
	2022-06-02T18:06:30.677931136Z  [  OK  ] Unmounted /etc/hostname.
	2022-06-02T18:06:30.678465453Z  [  OK  ] Unmounted /etc/hosts.
	2022-06-02T18:06:30.679207476Z  [  OK  ] Unmounted /etc/resolv.conf.
	2022-06-02T18:06:30.679900974Z  [  OK  ] Unmounted /kind/product_uuid.
	2022-06-02T18:06:30.680511954Z  [  OK  ] Unmounted /run/docker/netns/default.
	2022-06-02T18:06:30.681343826Z  [  OK  ] Unmounted /tmp/hostpath-provisioner.
	2022-06-02T18:06:30.682074115Z  [  OK  ] Unmounted /tmp/hostpath_pv.
	2022-06-02T18:06:30.682806287Z  [  OK  ] Unmounted /usr/lib/modules.
	2022-06-02T18:06:30.683409074Z  [  OK  ] Unmounted /var/lib/kubelet…ojected/kube-api-access-m5qcm.
	2022-06-02T18:06:30.684037977Z  [  OK  ] Unmounted /var/lib/kubelet…ojected/kube-api-access-w79cf.
	2022-06-02T18:06:30.684711134Z  [  OK  ] Unmounted /var/lib/kubelet…ojected/kube-api-access-9546w.
	2022-06-02T18:06:30.685460185Z  [  OK  ] Unmounted /var/lib/kubelet…ojected/kube-api-access-z7xs9.
	2022-06-02T18:06:30.686047211Z  [  OK  ] Unmounted /var/lib/kubelet…ojected/kube-api-access-w72rg.
	2022-06-02T18:06:30.688161204Z           Unmounting /tmp...
	2022-06-02T18:06:30.688957268Z  [  OK  ] Stopped Update UTMP about System Boot/Shutdown.
	2022-06-02T18:06:30.691118932Z           Unmounting /var...
	2022-06-02T18:06:30.693571544Z  [  OK  ] Unmounted /tmp.
	2022-06-02T18:06:30.693708667Z  [  OK  ] Stopped target Swap.
	2022-06-02T18:06:30.695713687Z  [  OK  ] Unmounted /var.
	2022-06-02T18:06:30.695852243Z  [  OK  ] Stopped target Local File Systems (Pre).
	2022-06-02T18:06:30.695870853Z  [  OK  ] Reached target Unmount All Filesystems.
	2022-06-02T18:06:30.696686660Z  [  OK  ] Stopped Create Static Device Nodes in /dev.
	2022-06-02T18:06:30.698847273Z  [  OK  ] Stopped Create System Users.
	2022-06-02T18:06:30.699433883Z  [  OK  ] Stopped Remount Root and Kernel File Systems.
	2022-06-02T18:06:30.699504247Z  [  OK  ] Reached target Shutdown.
	2022-06-02T18:06:30.699514147Z  [  OK  ] Reached target Final Step.
	2022-06-02T18:06:30.700839956Z           Starting Halt...
	2022-06-02T18:06:30.701154221Z  [  OK  ] Finished Power-Off.
	2022-06-02T18:06:30.701291041Z  [  OK  ] Reached target Power-Off.
	
	-- /stdout --
	I0602 18:06:32.111067  569614 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 18:06:32.226277  569614 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-02 18:06:32.14318528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 18:06:32.226397  569614 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-02 18:06:32.14318528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 18:06:32.226469  569614 network_create.go:272] running [docker network inspect default-k8s-different-port-20220602180121-283122] to gather additional debugging logs...
	I0602 18:06:32.226488  569614 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220602180121-283122
	W0602 18:06:32.267203  569614 cli_runner.go:211] docker network inspect default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:32.267250  569614 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220602180121-283122]: docker network inspect default-k8s-different-port-20220602180121-283122: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220602180121-283122
	I0602 18:06:32.267272  569614 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220602180121-283122]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220602180121-283122
	
	** /stderr **
	I0602 18:06:32.267385  569614 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 18:06:32.386012  569614 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-02 18:06:32.301928964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 18:06:32.386452  569614 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220602180121-283122
	I0602 18:06:32.427119  569614 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/config.json ...
	I0602 18:06:32.427350  569614 machine.go:88] provisioning docker machine ...
	I0602 18:06:32.427376  569614 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220602180121-283122"
	I0602 18:06:32.427414  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:32.463584  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:32.463663  569614 machine.go:91] provisioned docker machine in 36.296873ms
	I0602 18:06:32.463736  569614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 18:06:32.463792  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:32.499054  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:32.499216  569614 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:32.775707  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:32.813156  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:32.813268  569614 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:33.354021  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:33.392480  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:33.392609  569614 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:34.048240  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:34.085927  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	W0602 18:06:34.086055  569614 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 18:06:34.086073  569614 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:34.086151  569614 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 18:06:34.086195  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:34.124968  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:34.125129  569614 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:34.356480  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:34.392794  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:34.392935  569614 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:34.838322  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:34.872417  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:34.872556  569614 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:35.191079  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:35.231526  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:35.231677  569614 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:35.786285  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:35.820968  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	W0602 18:06:35.821135  569614 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 18:06:35.821160  569614 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:35.821172  569614 fix.go:57] fixHost completed within 3.882277416s
	I0602 18:06:35.821187  569614 start.go:81] releasing machines lock for "default-k8s-different-port-20220602180121-283122", held for 3.882319075s
	W0602 18:06:35.821231  569614 start.go:599] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 18:06:35.821367  569614 out.go:239] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:35.821385  569614 start.go:614] Will try again in 5 seconds ...
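	[annotation] The failed loop above reduces to one probe: ask Docker for the host port published for 22/tcp and retry with a short, growing delay while the container is not running. A minimal shell sketch of that pattern ($NODE is a hypothetical placeholder; this is not minikube's actual retry.go):
	  fmt='{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	  delay=0.25
	  for i in $(seq 1 8); do
	    if port=$(docker container inspect -f "$fmt" "$NODE" 2>/dev/null) && [ -n "$port" ]; then
	      echo "ssh host port: $port"; break
	    fi
	    sleep "$delay"; delay=$(awk -v d="$delay" 'BEGIN{print d*1.5}')
	  done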
	I0602 18:06:40.822212  569614 start.go:352] acquiring machines lock for default-k8s-different-port-20220602180121-283122: {Name:mkdd968f3b4a154d336fd595e63c931cb4826e3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 18:06:40.822362  569614 start.go:356] acquired machines lock for "default-k8s-different-port-20220602180121-283122" in 110.345µs
	I0602 18:06:40.822386  569614 start.go:94] Skipping create...Using existing machine configuration
	I0602 18:06:40.822397  569614 fix.go:55] fixHost starting: 
	I0602 18:06:40.822721  569614 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602180121-283122 --format={{.State.Status}}
	I0602 18:06:40.861730  569614 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220602180121-283122: state=Stopped err=<nil>
	W0602 18:06:40.861767  569614 fix.go:129] unexpected machine state, will restart: <nil>
	I0602 18:06:40.864231  569614 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220602180121-283122" ...
	I0602 18:06:40.865865  569614 cli_runner.go:164] Run: docker start default-k8s-different-port-20220602180121-283122
	W0602 18:06:40.913174  569614 cli_runner.go:211] docker start default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:40.913271  569614 cli_runner.go:164] Run: docker inspect default-k8s-different-port-20220602180121-283122
	I0602 18:06:40.946600  569614 errors.go:84] Postmortem inspect ("docker inspect default-k8s-different-port-20220602180121-283122"): -- stdout --
	[
	    {
	        "Id": "c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503",
	        "Created": "2022-06-02T18:01:28.65338102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431 not found",
	            "StartedAt": "2022-06-02T18:01:29.039934789Z",
	            "FinishedAt": "2022-06-02T18:06:30.797930837Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hostname",
	        "HostsPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hosts",
	        "LogPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503-json.log",
	        "Name": "/default-k8s-different-port-20220602180121-283122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220602180121-283122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220602180121-283122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48-init/diff:/var/lib/docker/overlay2/9517781e97ae29bbe1cf38381a601e806342524da1c5d5dc15ba54ec17f204b6/diff:/var/lib/docker/overlay2/28e14d8724bd82b9b6adde55b66e6d1f1be4fa6c3379f2f44ca1138dd648175a/diff:/var/lib/docker/overlay2/24589dfb9ddc941e7cb22b5e38dec69d1af0a29a330d344152ca7f26010354a8/diff:/var/lib/docker/overlay2/66c45d6273a3507b6e0b6b0753ec4fc82f07160fd115626e24a42bda27bbba86/diff:/var/lib/docker/overlay2/a6f0c661bd178a2a0fef034493a114067e8952b8516e69aff1e1e94a75918bbc/diff:/var/lib/docker/overlay2/5b40a045506372be84bfb01d05b068bc4cc926c5eb9e868226c4c386e8a331c7/diff:/var/lib/docker/overlay2/dbf9a9c6e59628984c95ea93ff4ada66c53b22ca3c2f910f70efba895cf40718/diff:/var/lib/docker/overlay2/5eea619393ee989a35a37a13b0633685af91729e1b74f6e73fd94b63c9d7d1fa/diff:/var/lib/docker/overlay2/24de6fbece5386c3658fa3e2c01723e6c1c24fb837d53b3d5a8c75b64ce2cd06/diff:/var/lib/docker/overlay2/04593e
f7d3bf72ee8b7439b0326af5b9818930e0dd48c35dba97e3b4ae39f4ee/diff:/var/lib/docker/overlay2/09310697cecb557085bb4d5dfc810a82fc533759ad98179263b2fc94153a64fa/diff:/var/lib/docker/overlay2/ee75c420f0615a19a5d44cfe042bdca5a2cf0f1c541168ba424140cf2aa7f98d/diff:/var/lib/docker/overlay2/fc8643a3e2060ab0365cd1227ab9ebbd5a573e6ebf8379bc6b64a756a1f66562/diff:/var/lib/docker/overlay2/0f8571cab87b35114fab495cbde9af8ac7e58be99412da82a79c90239e2227e1/diff:/var/lib/docker/overlay2/e1a5ae079e4f566081e5e786955257c23809c8719ea6b4593c8c91ac00380379/diff:/var/lib/docker/overlay2/373565e63dec12c85906e80b4a030f7d852992a756749be9424ac17802d589c0/diff:/var/lib/docker/overlay2/b886b42aa5e9f06b4381a6f58d55338774a2d02af2e91e3de156aa4bca4e4be0/diff:/var/lib/docker/overlay2/dee534a744ce54d08ca752b7f64e1772c063d4ba1e008c5a9773188bb009c377/diff:/var/lib/docker/overlay2/c8cf6667f29ffc5417675e4bd8eee6a91468de13292e3cb6ae2c70d3ad56d5fd/diff:/var/lib/docker/overlay2/072b57a4228c3928bdb27162809b102b485bb94c5f726c9a3e9892267ed30839/diff:/var/lib/d
ocker/overlay2/491d5b51bf6de1cfc4eef288c454f54e3b424e3b83fd9dc216c35891c7923f55/diff:/var/lib/docker/overlay2/78f167c2cc286e4058ef09d5a719437afddde854aac41b8d054567865fa61024/diff:/var/lib/docker/overlay2/10763b3ba26983ed9426ccca08bfceb7037d95c69827847e2ff755b4f2c5a4ab/diff:/var/lib/docker/overlay2/e559098efb21164d96bd0aee50fee9f48310df68887c5c884f4ba3f3fc895260/diff:/var/lib/docker/overlay2/555ef1505fe2b1ab4244f073e9212f269d70d8cd6ca0cd517d21cc80cd25b4c1/diff:/var/lib/docker/overlay2/0201da7c907a1f8f564cda48c3f3a403e484d901739d2584f1343a7907685c5e/diff:/var/lib/docker/overlay2/0abae5d84f8497a5114349421415431c43a34015ad98f6c9c3140143d2f1f383/diff:/var/lib/docker/overlay2/80c37982eef2dc00bd08f208551df377570134af84008156f6a500855fd2ba10/diff:/var/lib/docker/overlay2/cc38481bb11be97cd54c0032709337a491580c2f0ae6d3f7eec4c3616f8e3126/diff:/var/lib/docker/overlay2/dda978aeb4c7317bafbc077229c4417a4d8e7658d9604803c401448b471eab5e/diff:/var/lib/docker/overlay2/7c230ae0970689e2932cdaadc9e3f86ec8215fbf384c9eb59cfb958daf7
3197e/diff:/var/lib/docker/overlay2/145db51b169211e46c3bbab5ca998b2a019a4a813801acc2dae9b2dc09bc2ecc/diff:/var/lib/docker/overlay2/e604163672a013549c59bc042d1b932a3dad9e104d7079fbbb3ca24c29351c0d/diff:/var/lib/docker/overlay2/5a304d8680fd3fb594754173a70646923e2715ed21c98b8c43e1f1ef2d7abd6a/diff:/var/lib/docker/overlay2/b77f00351ddd70be2f3d9dd622b78e1a2937c4ba71585b357fef88d285eb762e/diff:/var/lib/docker/overlay2/196b6ae042b9b08748253c16f5e2b56d7f9a25077008d3d447cbc75447ed5e2a/diff:/var/lib/docker/overlay2/8c7c3fbb0bed8f7440d11104257806c70d16fd7d432311fc208d5012a439e5a1/diff:/var/lib/docker/overlay2/10604aa3b8ed5698c9fa8f55e00b9a9dc27d6b99135255d6c756fc080e615fc6/diff:/var/lib/docker/overlay2/c6602a17e9b0ab1d84297109204b48e60f293952fbc1feaf362895ec86860ead/diff:/var/lib/docker/overlay2/3828c5e50db9be2b4bc30fce040392ea735da5eaa819ad0848cb4fb79fc367da/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220602180121-283122",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220602180121-283122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220602180121-283122",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3989e814c0ac91c6e28c041e22d15c3ab28ed2a1bbec4bcc9b0a154e6c83dc06",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/3989e814c0ac",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220602180121-283122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c65299b1d424",
	                        "default-k8s-different-port-20220602180121-283122"
	                    ],
	                    "NetworkID": "c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
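	[annotation] The inspect output above carries the diagnosis: the container is "exited" (ExitCode 130) and "docker start" fails because the network it was attached to no longer exists ("Error": "network c9942ce2… not found", matching the NetworkID in the Networks block). The same fields can be pulled directly with an inspect format string, e.g.:
	  docker inspect -f '{{.State.Status}} {{.State.ExitCode}} {{.State.Error}}' \
	    default-k8s-different-port-20220602180121-283122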
	I0602 18:06:40.946683  569614 cli_runner.go:164] Run: docker logs --timestamps --details default-k8s-different-port-20220602180121-283122
	I0602 18:06:40.985754  569614 errors.go:91] Postmortem logs ("docker logs --timestamps --details default-k8s-different-port-20220602180121-283122"): -- stdout --
	2022-06-02T18:01:29.039763218Z  + userns=
	2022-06-02T18:01:29.039808871Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2022-06-02T18:01:29.043131676Z  + validate_userns
	2022-06-02T18:01:29.043156229Z  + [[ -z '' ]]
	2022-06-02T18:01:29.043160558Z  + return
	2022-06-02T18:01:29.043163822Z  + configure_containerd
	2022-06-02T18:01:29.043167291Z  + local snapshotter=
	2022-06-02T18:01:29.043174761Z  + [[ -n '' ]]
	2022-06-02T18:01:29.043178841Z  + [[ -z '' ]]
	2022-06-02T18:01:29.043596890Z  ++ stat -f -c %T /kind
	2022-06-02T18:01:29.044841327Z  + '[[overlayfs' == zfs ']]'
	2022-06-02T18:01:29.045209626Z  /usr/local/bin/entrypoint: line 112: [[overlayfs: command not found
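	# NOTE (annotation): the "command not found" above is a quoting slip in the
	# entrypoint, not a real failure: with no space after [[, the expansion glues
	# into the single word "[[overlayfs", which bash then tries to run as a command.
	#   broken: [[$(stat -f -c %T /kind) == zfs ]]   # executes: '[[overlayfs' == zfs ']]'
	#   fixed:  [[ $(stat -f -c %T /kind) == zfs ]]  # [[ is only a keyword as its own word
	# Harmless here: the failed test takes the same branch as a non-zfs result.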
	2022-06-02T18:01:29.045441945Z  + [[ -n '' ]]
	2022-06-02T18:01:29.045455930Z  + configure_proxy
	2022-06-02T18:01:29.045460082Z  + mkdir -p /etc/systemd/system.conf.d/
	2022-06-02T18:01:29.047278221Z  + [[ ! -z '' ]]
	2022-06-02T18:01:29.047292665Z  + cat
	2022-06-02T18:01:29.048375191Z  + fix_kmsg
	2022-06-02T18:01:29.048387449Z  + [[ ! -e /dev/kmsg ]]
	2022-06-02T18:01:29.048391595Z  + fix_mount
	2022-06-02T18:01:29.048395027Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2022-06-02T18:01:29.048398880Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2022-06-02T18:01:29.048839165Z  ++ which mount
	2022-06-02T18:01:29.050227870Z  ++ which umount
	2022-06-02T18:01:29.051152303Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2022-06-02T18:01:29.057711578Z  ++ which mount
	2022-06-02T18:01:29.059216309Z  ++ which umount
	2022-06-02T18:01:29.060223981Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2022-06-02T18:01:29.062025798Z  +++ which mount
	2022-06-02T18:01:29.062872200Z  ++ stat -f -c %T /usr/bin/mount
	2022-06-02T18:01:29.064034650Z  + [[ overlayfs == \a\u\f\s ]]
	2022-06-02T18:01:29.064050737Z  + echo 'INFO: remounting /sys read-only'
	2022-06-02T18:01:29.064054924Z  INFO: remounting /sys read-only
	2022-06-02T18:01:29.064058959Z  + mount -o remount,ro /sys
	2022-06-02T18:01:29.066140935Z  + echo 'INFO: making mounts shared'
	2022-06-02T18:01:29.066158066Z  INFO: making mounts shared
	2022-06-02T18:01:29.066162494Z  + mount --make-rshared /
	2022-06-02T18:01:29.067684076Z  + retryable_fix_cgroup
	2022-06-02T18:01:29.068068866Z  ++ seq 0 10
	2022-06-02T18:01:29.068839428Z  + for i in $(seq 0 10)
	2022-06-02T18:01:29.068854693Z  + fix_cgroup
	2022-06-02T18:01:29.068868320Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2022-06-02T18:01:29.068886421Z  + echo 'INFO: detected cgroup v1'
	2022-06-02T18:01:29.068890147Z  INFO: detected cgroup v1
	2022-06-02T18:01:29.068897825Z  + local current_cgroup
	2022-06-02T18:01:29.069718506Z  ++ grep -E '^[^:]*:([^:]*,)?cpu(,[^,:]*)?:.*' /proc/self/cgroup
	2022-06-02T18:01:29.069806519Z  ++ cut -d: -f3
	2022-06-02T18:01:29.071371976Z  + current_cgroup=/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.071388046Z  + '[' /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 = / ']'
	2022-06-02T18:01:29.071392561Z  + echo 'WARN: cgroupns not enabled! Please use cgroup v2, or cgroup v1 with cgroupns enabled.'
	2022-06-02T18:01:29.071396064Z  WARN: cgroupns not enabled! Please use cgroup v2, or cgroup v1 with cgroupns enabled.
	2022-06-02T18:01:29.071399658Z  + echo 'INFO: fix cgroup mounts for all subsystems'
	2022-06-02T18:01:29.071403061Z  INFO: fix cgroup mounts for all subsystems
	2022-06-02T18:01:29.071452466Z  + local cgroup_subsystems
	2022-06-02T18:01:29.072405715Z  ++ findmnt -lun -o source,target -t cgroup
	2022-06-02T18:01:29.072419959Z  ++ grep /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.073555183Z  ++ awk '{print $2}'
	2022-06-02T18:01:29.074852251Z  + cgroup_subsystems='/sys/fs/cgroup/systemd
	2022-06-02T18:01:29.074867993Z  /sys/fs/cgroup/blkio
	2022-06-02T18:01:29.074872556Z  /sys/fs/cgroup/rdma
	2022-06-02T18:01:29.074875936Z  /sys/fs/cgroup/cpu,cpuacct
	2022-06-02T18:01:29.074879269Z  /sys/fs/cgroup/cpuset
	2022-06-02T18:01:29.074882679Z  /sys/fs/cgroup/pids
	2022-06-02T18:01:29.074886391Z  /sys/fs/cgroup/hugetlb
	2022-06-02T18:01:29.074890046Z  /sys/fs/cgroup/devices
	2022-06-02T18:01:29.074893244Z  /sys/fs/cgroup/memory
	2022-06-02T18:01:29.074895452Z  /sys/fs/cgroup/net_cls,net_prio
	2022-06-02T18:01:29.074897509Z  /sys/fs/cgroup/freezer
	2022-06-02T18:01:29.074899502Z  /sys/fs/cgroup/perf_event'
	2022-06-02T18:01:29.074901499Z  + local unsupported_cgroups
	2022-06-02T18:01:29.077265630Z  ++ findmnt -lun -o source,target -t cgroup
	2022-06-02T18:01:29.077284493Z  ++ grep_allow_nomatch -v /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.077288954Z  ++ grep -v /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.077580628Z  ++ awk '{print $2}'
	2022-06-02T18:01:29.079156039Z  ++ [[ 1 == 1 ]]
	2022-06-02T18:01:29.080028697Z  + unsupported_cgroups=
	2022-06-02T18:01:29.080042623Z  + '[' -n '' ']'
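
The `grep_allow_nomatch` call above wraps grep so that an empty result (here, no unsupported cgroup mounts) does not abort the entrypoint, which runs under `set -e`; the `[[ 1 == 1 ]]` line is its exit-status check. A minimal sketch of that helper, reconstructed from the trace rather than quoted from the image:

	grep_allow_nomatch() {
	  # grep exits 0 on a match, 1 on no match, and >1 on a real error.
	  # Under `set -e` a plain no-match would kill the script, so map 1 to success.
	  grep "$@" || [[ $? == 1 ]]
	}
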
	2022-06-02T18:01:29.080047194Z  + local cgroup_mounts
	2022-06-02T18:01:29.080509206Z  ++ grep -E -o '/[[:alnum:]].* /sys/fs/cgroup.*.*cgroup' /proc/self/mountinfo
	2022-06-02T18:01:29.082816196Z  + cgroup_mounts='/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:378 master:9 - cgroup cgroup
	2022-06-02T18:01:29.082834531Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:379 master:16 - cgroup cgroup
	2022-06-02T18:01:29.082839599Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:380 master:17 - cgroup cgroup
	2022-06-02T18:01:29.082843150Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:381 master:18 - cgroup cgroup
	2022-06-02T18:01:29.082846751Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:383 master:19 - cgroup cgroup
	2022-06-02T18:01:29.082850331Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:388 master:20 - cgroup cgroup
	2022-06-02T18:01:29.082854001Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:389 master:21 - cgroup cgroup
	2022-06-02T18:01:29.082858232Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:390 master:22 - cgroup cgroup
	2022-06-02T18:01:29.082861905Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:391 master:23 - cgroup cgroup
	2022-06-02T18:01:29.082865679Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:392 master:24 - cgroup cgroup
	2022-06-02T18:01:29.082869457Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:393 master:25 - cgroup cgroup
	2022-06-02T18:01:29.082873329Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:395 master:26 - cgroup cgroup'
	2022-06-02T18:01:29.082878095Z  + [[ -n /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:378 master:9 - cgroup cgroup
	2022-06-02T18:01:29.082881803Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:379 master:16 - cgroup cgroup
	2022-06-02T18:01:29.082885080Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:380 master:17 - cgroup cgroup
	2022-06-02T18:01:29.082888357Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:381 master:18 - cgroup cgroup
	2022-06-02T18:01:29.082891497Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:383 master:19 - cgroup cgroup
	2022-06-02T18:01:29.082906694Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:388 master:20 - cgroup cgroup
	2022-06-02T18:01:29.082910816Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:389 master:21 - cgroup cgroup
	2022-06-02T18:01:29.082914287Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:390 master:22 - cgroup cgroup
	2022-06-02T18:01:29.082917282Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:391 master:23 - cgroup cgroup
	2022-06-02T18:01:29.082920897Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:392 master:24 - cgroup cgroup
	2022-06-02T18:01:29.082924515Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:393 master:25 - cgroup cgroup
	2022-06-02T18:01:29.082928009Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:395 master:26 - cgroup cgroup ]]
	2022-06-02T18:01:29.082931609Z  + local mount_root
	2022-06-02T18:01:29.083657574Z  ++ head -n 1
	2022-06-02T18:01:29.083802749Z  ++ cut '-d ' -f1
	2022-06-02T18:01:29.085146587Z  + mount_root=/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.085932747Z  ++ echo '/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:378 master:9 - cgroup cgroup
	2022-06-02T18:01:29.086225768Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:379 master:16 - cgroup cgroup
	2022-06-02T18:01:29.086232271Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/rdma rw,nosuid,nodev,noexec,relatime shared:380 master:17 - cgroup cgroup
	2022-06-02T18:01:29.086234927Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:381 master:18 - cgroup cgroup
	2022-06-02T18:01:29.086237382Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:383 master:19 - cgroup cgroup
	2022-06-02T18:01:29.086240454Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/pids rw,nosuid,nodev,noexec,relatime shared:388 master:20 - cgroup cgroup
	2022-06-02T18:01:29.086242903Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/hugetlb rw,nosuid,nodev,noexec,relatime shared:389 master:21 - cgroup cgroup
	2022-06-02T18:01:29.086245264Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:390 master:22 - cgroup cgroup
	2022-06-02T18:01:29.086247665Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:391 master:23 - cgroup cgroup
	2022-06-02T18:01:29.086261274Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/net_cls,net_prio rw,nosuid,nodev,noexec,relatime shared:392 master:24 - cgroup cgroup
	2022-06-02T18:01:29.086263921Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:393 master:25 - cgroup cgroup
	2022-06-02T18:01:29.086266186Z  /docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503 /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:395 master:26 - cgroup cgroup'
	2022-06-02T18:01:29.086269528Z  ++ cut '-d ' -f 2
	2022-06-02T18:01:29.087144982Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.087160489Z  + local target=/sys/fs/cgroup/systemd/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.087164281Z  + findmnt /sys/fs/cgroup/systemd/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.089146043Z  + mkdir -p /sys/fs/cgroup/systemd/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.090306813Z  + mount --bind /sys/fs/cgroup/systemd /sys/fs/cgroup/systemd/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.091962127Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.091976378Z  + local target=/sys/fs/cgroup/blkio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.091986075Z  + findmnt /sys/fs/cgroup/blkio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.094712159Z  + mkdir -p /sys/fs/cgroup/blkio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.096024519Z  + mount --bind /sys/fs/cgroup/blkio /sys/fs/cgroup/blkio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.097441634Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.097458429Z  + local target=/sys/fs/cgroup/rdma/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.097462950Z  + findmnt /sys/fs/cgroup/rdma/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.099280363Z  + mkdir -p /sys/fs/cgroup/rdma/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.100422130Z  + mount --bind /sys/fs/cgroup/rdma /sys/fs/cgroup/rdma/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.102093423Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.102111378Z  + local target=/sys/fs/cgroup/cpu,cpuacct/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.102115800Z  + findmnt /sys/fs/cgroup/cpu,cpuacct/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.104384013Z  + mkdir -p /sys/fs/cgroup/cpu,cpuacct/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.105837434Z  + mount --bind /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/cpu,cpuacct/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.107478835Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.107494765Z  + local target=/sys/fs/cgroup/cpuset/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.107556704Z  + findmnt /sys/fs/cgroup/cpuset/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.110130312Z  + mkdir -p /sys/fs/cgroup/cpuset/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.137915941Z  + mount --bind /sys/fs/cgroup/cpuset /sys/fs/cgroup/cpuset/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.139418172Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.139441310Z  + local target=/sys/fs/cgroup/pids/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.139446247Z  + findmnt /sys/fs/cgroup/pids/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.141865239Z  + mkdir -p /sys/fs/cgroup/pids/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.143399838Z  + mount --bind /sys/fs/cgroup/pids /sys/fs/cgroup/pids/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.144898706Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.144920310Z  + local target=/sys/fs/cgroup/hugetlb/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.144925030Z  + findmnt /sys/fs/cgroup/hugetlb/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.147091310Z  + mkdir -p /sys/fs/cgroup/hugetlb/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.148307905Z  + mount --bind /sys/fs/cgroup/hugetlb /sys/fs/cgroup/hugetlb/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.149822485Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.149843406Z  + local target=/sys/fs/cgroup/devices/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.149848272Z  + findmnt /sys/fs/cgroup/devices/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.151915288Z  + mkdir -p /sys/fs/cgroup/devices/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.153114598Z  + mount --bind /sys/fs/cgroup/devices /sys/fs/cgroup/devices/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.154453916Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.154466522Z  + local target=/sys/fs/cgroup/memory/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.154480465Z  + findmnt /sys/fs/cgroup/memory/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.156225539Z  + mkdir -p /sys/fs/cgroup/memory/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.157369269Z  + mount --bind /sys/fs/cgroup/memory /sys/fs/cgroup/memory/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.158724076Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.158743186Z  + local target=/sys/fs/cgroup/net_cls,net_prio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.158749761Z  + findmnt /sys/fs/cgroup/net_cls,net_prio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.161726755Z  + mkdir -p /sys/fs/cgroup/net_cls,net_prio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.163113343Z  + mount --bind /sys/fs/cgroup/net_cls,net_prio /sys/fs/cgroup/net_cls,net_prio/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.165269668Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.165288549Z  + local target=/sys/fs/cgroup/freezer/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.165293227Z  + findmnt /sys/fs/cgroup/freezer/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.167765308Z  + mkdir -p /sys/fs/cgroup/freezer/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.169057041Z  + mount --bind /sys/fs/cgroup/freezer /sys/fs/cgroup/freezer/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.170752265Z  + for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2)
	2022-06-02T18:01:29.170766283Z  + local target=/sys/fs/cgroup/perf_event/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.170769350Z  + findmnt /sys/fs/cgroup/perf_event/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.172757082Z  + mkdir -p /sys/fs/cgroup/perf_event/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.173831049Z  + mount --bind /sys/fs/cgroup/perf_event /sys/fs/cgroup/perf_event/docker/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503
	2022-06-02T18:01:29.175207466Z  + mount --make-rprivate /sys/fs/cgroup
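
The loop above works around the missing cgroup namespace: for each cgroup v1 mount, the container's own cgroup path (mount_root) is created inside the subsystem and the subsystem root is bind-mounted onto it, so the paths listed in /proc/self/cgroup resolve inside the container. A sketch of the loop under those assumptions, with variable names taken from the trace:

	# mount_root is the container's cgroup path, e.g. /docker/<container-id>
	for mount_point in $(echo "${cgroup_mounts}" | cut -d' ' -f 2); do
	  target="${mount_point}${mount_root}"
	  if ! findmnt "${target}" > /dev/null 2>&1; then
	    mkdir -p "${target}"
	    mount --bind "${mount_point}" "${target}"
	  fi
	done
	# Keep the extra bind mounts from propagating back out of the container.
	mount --make-rprivate /sys/fs/cgroup
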
	2022-06-02T18:01:29.177164182Z  + echo '/sys/fs/cgroup/systemd
	2022-06-02T18:01:29.177187926Z  /sys/fs/cgroup/blkio
	2022-06-02T18:01:29.177192392Z  /sys/fs/cgroup/rdma
	2022-06-02T18:01:29.177195842Z  /sys/fs/cgroup/cpu,cpuacct
	2022-06-02T18:01:29.177198902Z  /sys/fs/cgroup/cpuset
	2022-06-02T18:01:29.177202367Z  /sys/fs/cgroup/pids
	2022-06-02T18:01:29.177220227Z  /sys/fs/cgroup/hugetlb
	2022-06-02T18:01:29.177223428Z  /sys/fs/cgroup/devices
	2022-06-02T18:01:29.177226365Z  /sys/fs/cgroup/memory
	2022-06-02T18:01:29.177229728Z  /sys/fs/cgroup/net_cls,net_prio
	2022-06-02T18:01:29.177232715Z  /sys/fs/cgroup/freezer
	2022-06-02T18:01:29.177236010Z  /sys/fs/cgroup/perf_event'
	2022-06-02T18:01:29.177246576Z  + IFS=
	2022-06-02T18:01:29.177249807Z  + read -r subsystem
	2022-06-02T18:01:29.177760415Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/systemd
	2022-06-02T18:01:29.177776349Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.177780337Z  + local subsystem=/sys/fs/cgroup/systemd
	2022-06-02T18:01:29.177783685Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.177786769Z  + mkdir -p /sys/fs/cgroup/systemd//kubelet
	2022-06-02T18:01:29.179120179Z  + '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.179138452Z  + mount --bind /sys/fs/cgroup/systemd//kubelet /sys/fs/cgroup/systemd//kubelet
	2022-06-02T18:01:29.180571532Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/systemd
	2022-06-02T18:01:29.180588028Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.180592028Z  + local subsystem=/sys/fs/cgroup/systemd
	2022-06-02T18:01:29.180595091Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.180598646Z  + mkdir -p /sys/fs/cgroup/systemd//kubelet.slice
	2022-06-02T18:01:29.182121519Z  + '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.182709656Z  + mount --bind /sys/fs/cgroup/systemd//kubelet.slice /sys/fs/cgroup/systemd//kubelet.slice
	2022-06-02T18:01:29.184406295Z  + IFS=
	2022-06-02T18:01:29.184422892Z  + read -r subsystem
	2022-06-02T18:01:29.184427101Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/blkio
	2022-06-02T18:01:29.184430642Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.184434128Z  + local subsystem=/sys/fs/cgroup/blkio
	2022-06-02T18:01:29.184437480Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.184441210Z  + mkdir -p /sys/fs/cgroup/blkio//kubelet
	2022-06-02T18:01:29.185637645Z  + '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.185653509Z  + mount --bind /sys/fs/cgroup/blkio//kubelet /sys/fs/cgroup/blkio//kubelet
	2022-06-02T18:01:29.187365903Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/blkio
	2022-06-02T18:01:29.187381214Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.187385584Z  + local subsystem=/sys/fs/cgroup/blkio
	2022-06-02T18:01:29.187389572Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.187564227Z  + mkdir -p /sys/fs/cgroup/blkio//kubelet.slice
	2022-06-02T18:01:29.188958319Z  + '[' /sys/fs/cgroup/blkio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.188985214Z  + mount --bind /sys/fs/cgroup/blkio//kubelet.slice /sys/fs/cgroup/blkio//kubelet.slice
	2022-06-02T18:01:29.192597279Z  + IFS=
	2022-06-02T18:01:29.192615219Z  + read -r subsystem
	2022-06-02T18:01:29.192619339Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/rdma
	2022-06-02T18:01:29.192657914Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.192674053Z  + local subsystem=/sys/fs/cgroup/rdma
	2022-06-02T18:01:29.192676982Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.192679275Z  + mkdir -p /sys/fs/cgroup/rdma//kubelet
	2022-06-02T18:01:29.194145746Z  + '[' /sys/fs/cgroup/rdma == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.194162398Z  + mount --bind /sys/fs/cgroup/rdma//kubelet /sys/fs/cgroup/rdma//kubelet
	2022-06-02T18:01:29.195577859Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/rdma
	2022-06-02T18:01:29.195595824Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.195600244Z  + local subsystem=/sys/fs/cgroup/rdma
	2022-06-02T18:01:29.195603946Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.195607656Z  + mkdir -p /sys/fs/cgroup/rdma//kubelet.slice
	2022-06-02T18:01:29.196632358Z  + '[' /sys/fs/cgroup/rdma == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.196646035Z  + mount --bind /sys/fs/cgroup/rdma//kubelet.slice /sys/fs/cgroup/rdma//kubelet.slice
	2022-06-02T18:01:29.198116187Z  + IFS=
	2022-06-02T18:01:29.198134884Z  + read -r subsystem
	2022-06-02T18:01:29.198139225Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpu,cpuacct
	2022-06-02T18:01:29.198143142Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.198146898Z  + local subsystem=/sys/fs/cgroup/cpu,cpuacct
	2022-06-02T18:01:29.198154972Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.198158880Z  + mkdir -p /sys/fs/cgroup/cpu,cpuacct//kubelet
	2022-06-02T18:01:29.199366707Z  + '[' /sys/fs/cgroup/cpu,cpuacct == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.199382497Z  + mount --bind /sys/fs/cgroup/cpu,cpuacct//kubelet /sys/fs/cgroup/cpu,cpuacct//kubelet
	2022-06-02T18:01:29.200860053Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/cpu,cpuacct
	2022-06-02T18:01:29.200876151Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.200880474Z  + local subsystem=/sys/fs/cgroup/cpu,cpuacct
	2022-06-02T18:01:29.200884161Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.200887658Z  + mkdir -p /sys/fs/cgroup/cpu,cpuacct//kubelet.slice
	2022-06-02T18:01:29.202203786Z  + '[' /sys/fs/cgroup/cpu,cpuacct == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.202226790Z  + mount --bind /sys/fs/cgroup/cpu,cpuacct//kubelet.slice /sys/fs/cgroup/cpu,cpuacct//kubelet.slice
	2022-06-02T18:01:29.203680615Z  + IFS=
	2022-06-02T18:01:29.203698566Z  + read -r subsystem
	2022-06-02T18:01:29.203718856Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/cpuset
	2022-06-02T18:01:29.203723726Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.203727371Z  + local subsystem=/sys/fs/cgroup/cpuset
	2022-06-02T18:01:29.203744975Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.203748997Z  + mkdir -p /sys/fs/cgroup/cpuset//kubelet
	2022-06-02T18:01:29.237608820Z  + '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.237637790Z  + cat /sys/fs/cgroup/cpuset/cpuset.cpus
	2022-06-02T18:01:29.239921660Z  + cat /sys/fs/cgroup/cpuset/cpuset.mems
	2022-06-02T18:01:29.241292873Z  + mount --bind /sys/fs/cgroup/cpuset//kubelet /sys/fs/cgroup/cpuset//kubelet
	2022-06-02T18:01:29.243779504Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/cpuset
	2022-06-02T18:01:29.243801445Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.243806119Z  + local subsystem=/sys/fs/cgroup/cpuset
	2022-06-02T18:01:29.243818479Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.243823695Z  + mkdir -p /sys/fs/cgroup/cpuset//kubelet.slice
	2022-06-02T18:01:29.247186658Z  + '[' /sys/fs/cgroup/cpuset == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.247217554Z  + cat /sys/fs/cgroup/cpuset/cpuset.cpus
	2022-06-02T18:01:29.249189687Z  + cat /sys/fs/cgroup/cpuset/cpuset.mems
	2022-06-02T18:01:29.250073305Z  + mount --bind /sys/fs/cgroup/cpuset//kubelet.slice /sys/fs/cgroup/cpuset//kubelet.slice
	2022-06-02T18:01:29.251569478Z  + IFS=
	2022-06-02T18:01:29.251589131Z  + read -r subsystem
	2022-06-02T18:01:29.251593237Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/pids
	2022-06-02T18:01:29.251596674Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.251600251Z  + local subsystem=/sys/fs/cgroup/pids
	2022-06-02T18:01:29.251603378Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.251606711Z  + mkdir -p /sys/fs/cgroup/pids//kubelet
	2022-06-02T18:01:29.252740303Z  + '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.252754785Z  + mount --bind /sys/fs/cgroup/pids//kubelet /sys/fs/cgroup/pids//kubelet
	2022-06-02T18:01:29.254111787Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/pids
	2022-06-02T18:01:29.254126404Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.254129309Z  + local subsystem=/sys/fs/cgroup/pids
	2022-06-02T18:01:29.254131624Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.254134036Z  + mkdir -p /sys/fs/cgroup/pids//kubelet.slice
	2022-06-02T18:01:29.255392031Z  + '[' /sys/fs/cgroup/pids == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.255408263Z  + mount --bind /sys/fs/cgroup/pids//kubelet.slice /sys/fs/cgroup/pids//kubelet.slice
	2022-06-02T18:01:29.256812147Z  + IFS=
	2022-06-02T18:01:29.256841585Z  + read -r subsystem
	2022-06-02T18:01:29.256845927Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/hugetlb
	2022-06-02T18:01:29.256930513Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.256944065Z  + local subsystem=/sys/fs/cgroup/hugetlb
	2022-06-02T18:01:29.256947127Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.256949358Z  + mkdir -p /sys/fs/cgroup/hugetlb//kubelet
	2022-06-02T18:01:29.258164309Z  + '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.258180158Z  + mount --bind /sys/fs/cgroup/hugetlb//kubelet /sys/fs/cgroup/hugetlb//kubelet
	2022-06-02T18:01:29.259446400Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/hugetlb
	2022-06-02T18:01:29.259459305Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.259463348Z  + local subsystem=/sys/fs/cgroup/hugetlb
	2022-06-02T18:01:29.259467128Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.259470930Z  + mkdir -p /sys/fs/cgroup/hugetlb//kubelet.slice
	2022-06-02T18:01:29.260777885Z  + '[' /sys/fs/cgroup/hugetlb == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.260795760Z  + mount --bind /sys/fs/cgroup/hugetlb//kubelet.slice /sys/fs/cgroup/hugetlb//kubelet.slice
	2022-06-02T18:01:29.262131205Z  + IFS=
	2022-06-02T18:01:29.262149142Z  + read -r subsystem
	2022-06-02T18:01:29.262153384Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/devices
	2022-06-02T18:01:29.262157515Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.262161127Z  + local subsystem=/sys/fs/cgroup/devices
	2022-06-02T18:01:29.262164486Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.262218521Z  + mkdir -p /sys/fs/cgroup/devices//kubelet
	2022-06-02T18:01:29.263394209Z  + '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.263405702Z  + mount --bind /sys/fs/cgroup/devices//kubelet /sys/fs/cgroup/devices//kubelet
	2022-06-02T18:01:29.264772219Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/devices
	2022-06-02T18:01:29.264788042Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.264792006Z  + local subsystem=/sys/fs/cgroup/devices
	2022-06-02T18:01:29.264795401Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.264799072Z  + mkdir -p /sys/fs/cgroup/devices//kubelet.slice
	2022-06-02T18:01:29.265986529Z  + '[' /sys/fs/cgroup/devices == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.266001907Z  + mount --bind /sys/fs/cgroup/devices//kubelet.slice /sys/fs/cgroup/devices//kubelet.slice
	2022-06-02T18:01:29.267281308Z  + IFS=
	2022-06-02T18:01:29.267291915Z  + read -r subsystem
	2022-06-02T18:01:29.267295594Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/memory
	2022-06-02T18:01:29.267299004Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.267314907Z  + local subsystem=/sys/fs/cgroup/memory
	2022-06-02T18:01:29.267318303Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.267321966Z  + mkdir -p /sys/fs/cgroup/memory//kubelet
	2022-06-02T18:01:29.268473700Z  + '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.268486955Z  + mount --bind /sys/fs/cgroup/memory//kubelet /sys/fs/cgroup/memory//kubelet
	2022-06-02T18:01:29.269825883Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/memory
	2022-06-02T18:01:29.269836824Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.269840246Z  + local subsystem=/sys/fs/cgroup/memory
	2022-06-02T18:01:29.269843732Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.269847057Z  + mkdir -p /sys/fs/cgroup/memory//kubelet.slice
	2022-06-02T18:01:29.270851481Z  + '[' /sys/fs/cgroup/memory == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.270866091Z  + mount --bind /sys/fs/cgroup/memory//kubelet.slice /sys/fs/cgroup/memory//kubelet.slice
	2022-06-02T18:01:29.272059743Z  + IFS=
	2022-06-02T18:01:29.272076780Z  + read -r subsystem
	2022-06-02T18:01:29.272081903Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/net_cls,net_prio
	2022-06-02T18:01:29.272085760Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.272089276Z  + local subsystem=/sys/fs/cgroup/net_cls,net_prio
	2022-06-02T18:01:29.272092643Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.272096041Z  + mkdir -p /sys/fs/cgroup/net_cls,net_prio//kubelet
	2022-06-02T18:01:29.273172225Z  + '[' /sys/fs/cgroup/net_cls,net_prio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.273186591Z  + mount --bind /sys/fs/cgroup/net_cls,net_prio//kubelet /sys/fs/cgroup/net_cls,net_prio//kubelet
	2022-06-02T18:01:29.274436179Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/net_cls,net_prio
	2022-06-02T18:01:29.274450315Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.274454109Z  + local subsystem=/sys/fs/cgroup/net_cls,net_prio
	2022-06-02T18:01:29.274457360Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.274460871Z  + mkdir -p /sys/fs/cgroup/net_cls,net_prio//kubelet.slice
	2022-06-02T18:01:29.275423077Z  + '[' /sys/fs/cgroup/net_cls,net_prio == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.275438015Z  + mount --bind /sys/fs/cgroup/net_cls,net_prio//kubelet.slice /sys/fs/cgroup/net_cls,net_prio//kubelet.slice
	2022-06-02T18:01:29.276707270Z  + IFS=
	2022-06-02T18:01:29.276735605Z  + read -r subsystem
	2022-06-02T18:01:29.276740667Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/freezer
	2022-06-02T18:01:29.276744095Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.276747512Z  + local subsystem=/sys/fs/cgroup/freezer
	2022-06-02T18:01:29.276751042Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.276754760Z  + mkdir -p /sys/fs/cgroup/freezer//kubelet
	2022-06-02T18:01:29.277980282Z  + '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.277995548Z  + mount --bind /sys/fs/cgroup/freezer//kubelet /sys/fs/cgroup/freezer//kubelet
	2022-06-02T18:01:29.279171123Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/freezer
	2022-06-02T18:01:29.279185525Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.279189626Z  + local subsystem=/sys/fs/cgroup/freezer
	2022-06-02T18:01:29.279193536Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.279197005Z  + mkdir -p /sys/fs/cgroup/freezer//kubelet.slice
	2022-06-02T18:01:29.280533530Z  + '[' /sys/fs/cgroup/freezer == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.280545457Z  + mount --bind /sys/fs/cgroup/freezer//kubelet.slice /sys/fs/cgroup/freezer//kubelet.slice
	2022-06-02T18:01:29.281822406Z  + IFS=
	2022-06-02T18:01:29.281838306Z  + read -r subsystem
	2022-06-02T18:01:29.281842894Z  + mount_kubelet_cgroup_root /kubelet /sys/fs/cgroup/perf_event
	2022-06-02T18:01:29.281846506Z  + local cgroup_root=/kubelet
	2022-06-02T18:01:29.281849608Z  + local subsystem=/sys/fs/cgroup/perf_event
	2022-06-02T18:01:29.281855327Z  + '[' -z /kubelet ']'
	2022-06-02T18:01:29.281858791Z  + mkdir -p /sys/fs/cgroup/perf_event//kubelet
	2022-06-02T18:01:29.283032331Z  + '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.283048168Z  + mount --bind /sys/fs/cgroup/perf_event//kubelet /sys/fs/cgroup/perf_event//kubelet
	2022-06-02T18:01:29.284184484Z  + mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/perf_event
	2022-06-02T18:01:29.284197566Z  + local cgroup_root=/kubelet.slice
	2022-06-02T18:01:29.284201459Z  + local subsystem=/sys/fs/cgroup/perf_event
	2022-06-02T18:01:29.284205031Z  + '[' -z /kubelet.slice ']'
	2022-06-02T18:01:29.284208643Z  + mkdir -p /sys/fs/cgroup/perf_event//kubelet.slice
	2022-06-02T18:01:29.285270234Z  + '[' /sys/fs/cgroup/perf_event == /sys/fs/cgroup/cpuset ']'
	2022-06-02T18:01:29.285283788Z  + mount --bind /sys/fs/cgroup/perf_event//kubelet.slice /sys/fs/cgroup/perf_event//kubelet.slice
	2022-06-02T18:01:29.286511300Z  + IFS=
	2022-06-02T18:01:29.286525835Z  + read -r subsystem
	2022-06-02T18:01:29.286827738Z  + return
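
Every subsystem iteration above is the same helper applied to /kubelet and then /kubelet.slice. A sketch of `mount_kubelet_cgroup_root` as it can be reconstructed from the trace (the cpuset branch explains the two bare `cat` calls in the cpuset iteration; xtrace hides redirections, so the assumption is that their output seeds the new child group):

	mount_kubelet_cgroup_root() {
	  local cgroup_root="$1"
	  local subsystem="$2"
	  [ -z "${cgroup_root}" ] && return 0
	  mkdir -p "${subsystem}/${cgroup_root}"
	  if [ "${subsystem}" == "/sys/fs/cgroup/cpuset" ]; then
	    # cpuset children start empty and reject tasks until cpus/mems are seeded.
	    cat "${subsystem}/cpuset.cpus" > "${subsystem}/${cgroup_root}/cpuset.cpus"
	    cat "${subsystem}/cpuset.mems" > "${subsystem}/${cgroup_root}/cpuset.mems"
	  fi
	  # Self bind-mount so the kubelet sees its cgroup root as a real mount point.
	  mount --bind "${subsystem}/${cgroup_root}" "${subsystem}/${cgroup_root}"
	}
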
	2022-06-02T18:01:29.286841924Z  + fix_machine_id
	2022-06-02T18:01:29.286845970Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2022-06-02T18:01:29.286850027Z  INFO: clearing and regenerating /etc/machine-id
	2022-06-02T18:01:29.286855018Z  + rm -f /etc/machine-id
	2022-06-02T18:01:29.287852401Z  + systemd-machine-id-setup
	2022-06-02T18:01:29.291555583Z  Initializing machine ID from D-Bus machine ID.
	2022-06-02T18:01:29.294608659Z  + fix_product_name
	2022-06-02T18:01:29.294646421Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2022-06-02T18:01:29.294651254Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2022-06-02T18:01:29.294658461Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2022-06-02T18:01:29.294662429Z  + echo kind
	2022-06-02T18:01:29.294848199Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2022-06-02T18:01:29.296489128Z  + fix_product_uuid
	2022-06-02T18:01:29.296501596Z  + [[ ! -f /kind/product_uuid ]]
	2022-06-02T18:01:29.296504132Z  + cat /proc/sys/kernel/random/uuid
	2022-06-02T18:01:29.297584756Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2022-06-02T18:01:29.297604288Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2022-06-02T18:01:29.297608023Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2022-06-02T18:01:29.297610883Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2022-06-02T18:01:29.298927791Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2022-06-02T18:01:29.298939874Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2022-06-02T18:01:29.298942624Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2022-06-02T18:01:29.298944832Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
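
Both "faking" steps above use the same trick: write the desired value to a file under /kind, then shadow the read-only sysfs entry with a read-only bind mount. A condensed sketch of what the trace shows (paths are taken from the log; existence checks and error handling omitted):

	# fix_product_name: make the DMI product name read as "kind"
	echo 'kind' > /kind/product_name
	mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name

	# fix_product_uuid: generate a random UUID once, mount it over both DMI paths
	cat /proc/sys/kernel/random/uuid > /kind/product_uuid
	mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
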
	2022-06-02T18:01:29.300500332Z  + select_iptables
	2022-06-02T18:01:29.300515241Z  + local mode num_legacy_lines num_nft_lines
	2022-06-02T18:01:29.301436075Z  ++ grep -c '^-'
	2022-06-02T18:01:29.305171416Z  + num_legacy_lines=6
	2022-06-02T18:01:29.305954579Z  ++ grep -c '^-'
	2022-06-02T18:01:29.310092390Z  ++ true
	2022-06-02T18:01:29.310273248Z  + num_nft_lines=0
	2022-06-02T18:01:29.310283889Z  + '[' 6 -ge 0 ']'
	2022-06-02T18:01:29.310355517Z  + mode=legacy
	2022-06-02T18:01:29.310373925Z  + echo 'INFO: setting iptables to detected mode: legacy'
	2022-06-02T18:01:29.310378405Z  INFO: setting iptables to detected mode: legacy
	2022-06-02T18:01:29.310382032Z  + update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-06-02T18:01:29.310414391Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-legacy'
	2022-06-02T18:01:29.310428412Z  + local 'args=--set iptables /usr/sbin/iptables-legacy'
	2022-06-02T18:01:29.310924243Z  ++ seq 0 15
	2022-06-02T18:01:29.311528431Z  + for i in $(seq 0 15)
	2022-06-02T18:01:29.311544440Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-legacy
	2022-06-02T18:01:29.314951921Z  + return
	2022-06-02T18:01:29.314971640Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-06-02T18:01:29.315112840Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-legacy'
	2022-06-02T18:01:29.315133627Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-legacy'
	2022-06-02T18:01:29.315508413Z  ++ seq 0 15
	2022-06-02T18:01:29.316171506Z  + for i in $(seq 0 15)
	2022-06-02T18:01:29.316180783Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
	2022-06-02T18:01:29.319231649Z  + return
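
The iptables mode selection above counts rule lines in each backend's save output and switches update-alternatives to the busier backend; here 6 legacy rules versus 0 nft rules picks legacy. A sketch consistent with the trace (the save commands themselves are hidden inside the command substitutions, so their exact invocation is an assumption; the real entrypoint also wraps each update-alternatives call in a retry loop, per the `seq 0 15` lines):

	select_iptables() {
	  local mode num_legacy_lines num_nft_lines
	  # Rules emitted by the *-save tools start with "-", e.g. "-A FORWARD ...".
	  num_legacy_lines=$( (iptables-legacy-save; ip6tables-legacy-save) 2>/dev/null | grep -c '^-' || true)
	  num_nft_lines=$( (iptables-nft-save; ip6tables-nft-save) 2>/dev/null | grep -c '^-' || true)
	  if [ "${num_legacy_lines}" -ge "${num_nft_lines}" ]; then
	    mode=legacy
	  else
	    mode=nft
	  fi
	  echo "INFO: setting iptables to detected mode: ${mode}"
	  update-alternatives --set iptables "/usr/sbin/iptables-${mode}"
	  update-alternatives --set ip6tables "/usr/sbin/ip6tables-${mode}"
	}
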
	2022-06-02T18:01:29.319252497Z  + enable_network_magic
	2022-06-02T18:01:29.319261361Z  + local docker_embedded_dns_ip=127.0.0.11
	2022-06-02T18:01:29.319265285Z  + local docker_host_ip
	2022-06-02T18:01:29.320449615Z  ++ cut '-d ' -f1
	2022-06-02T18:01:29.320601990Z  ++ head -n1 /dev/fd/63
	2022-06-02T18:01:29.320676505Z  +++ getent ahostsv4 host.docker.internal
	2022-06-02T18:01:29.337394062Z  + docker_host_ip=
	2022-06-02T18:01:29.337422978Z  + [[ -z '' ]]
	2022-06-02T18:01:29.338119728Z  ++ ip -4 route show default
	2022-06-02T18:01:29.338213560Z  ++ cut '-d ' -f3
	2022-06-02T18:01:29.339783648Z  + docker_host_ip=192.168.67.1
	2022-06-02T18:01:29.340089618Z  + iptables-save
	2022-06-02T18:01:29.341240737Z  + iptables-restore
	2022-06-02T18:01:29.341915326Z  + sed -e 's/-d 127.0.0.11/-d 192.168.67.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.67.1:53/g'
	2022-06-02T18:01:29.344938104Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2022-06-02T18:01:29.346371079Z  + sed -e s/127.0.0.11/192.168.67.1/g /etc/resolv.conf.original
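
The three sed expressions above are the heart of enable_network_magic: Docker's embedded DNS address (127.0.0.11) is replaced with the host gateway in the container's NAT rules and in resolv.conf, and each DOCKER_OUTPUT rule in OUTPUT is duplicated into PREROUTING. A runnable sketch of that rewrite with the expressions copied from the trace (host.docker.internal resolution failed above, so only the default-route fallback is shown):

	docker_embedded_dns_ip='127.0.0.11'
	# Fallback used in this log: the default gateway of the container network.
	docker_host_ip="$(ip -4 route show default | cut -d' ' -f3)"

	iptables-save \
	  | sed -e "s/-d ${docker_embedded_dns_ip}/-d ${docker_host_ip}/g" \
	        -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' \
	        -e "s/--to-source :53/--to-source ${docker_host_ip}:53/g" \
	  | iptables-restore

	cp /etc/resolv.conf /etc/resolv.conf.original
	sed -e "s/${docker_embedded_dns_ip}/${docker_host_ip}/g" /etc/resolv.conf.original > /etc/resolv.conf
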
	2022-06-02T18:01:29.348833391Z  ++ cut '-d ' -f1
	2022-06-02T18:01:29.348982790Z  ++ head -n1 /dev/fd/63
	2022-06-02T18:01:29.349573906Z  ++++ hostname
	2022-06-02T18:01:29.350243315Z  +++ getent ahostsv4 default-k8s-different-port-20220602180121-283122
	2022-06-02T18:01:29.352149497Z  + curr_ipv4=192.168.67.2
	2022-06-02T18:01:29.352164941Z  + echo 'INFO: Detected IPv4 address: 192.168.67.2'
	2022-06-02T18:01:29.352169101Z  INFO: Detected IPv4 address: 192.168.67.2
	2022-06-02T18:01:29.352172947Z  + '[' -f /kind/old-ipv4 ']'
	2022-06-02T18:01:29.352218828Z  + [[ -n 192.168.67.2 ]]
	2022-06-02T18:01:29.352232732Z  + echo -n 192.168.67.2
	2022-06-02T18:01:29.353484475Z  ++ cut '-d ' -f1
	2022-06-02T18:01:29.353500787Z  ++ head -n1 /dev/fd/63
	2022-06-02T18:01:29.354044710Z  ++++ hostname
	2022-06-02T18:01:29.354661546Z  +++ getent ahostsv6 default-k8s-different-port-20220602180121-283122
	2022-06-02T18:01:29.356229784Z  + curr_ipv6=
	2022-06-02T18:01:29.356243241Z  + echo 'INFO: Detected IPv6 address: '
	2022-06-02T18:01:29.356247285Z  INFO: Detected IPv6 address: 
	2022-06-02T18:01:29.356264408Z  + '[' -f /kind/old-ipv6 ']'
	2022-06-02T18:01:29.356269624Z  + [[ -n '' ]]
	2022-06-02T18:01:29.356725003Z  ++ uname -a
	2022-06-02T18:01:29.357466503Z  + echo 'entrypoint completed: Linux default-k8s-different-port-20220602180121-283122 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux'
	2022-06-02T18:01:29.357481318Z  entrypoint completed: Linux default-k8s-different-port-20220602180121-283122 5.13.0-1027-gcp #32~20.04.1-Ubuntu SMP Thu May 26 10:53:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	2022-06-02T18:01:29.357485435Z  + exec /sbin/init
	2022-06-02T18:01:29.363773075Z  systemd 245.4-4ubuntu3.17 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
	2022-06-02T18:01:29.363790846Z  Detected virtualization docker.
	2022-06-02T18:01:29.363793756Z  Detected architecture x86-64.
	2022-06-02T18:01:29.364127978Z  
	2022-06-02T18:01:29.364144348Z  Welcome to Ubuntu 20.04.4 LTS!
	2022-06-02T18:01:29.364148832Z  
	2022-06-02T18:01:29.364152189Z  Set hostname to <default-k8s-different-port-20220602180121-283122>.
	2022-06-02T18:01:29.406434747Z  [  OK  ] Started Dispatch Password …ts to Console Directory Watch.
	2022-06-02T18:01:29.406641471Z  [  OK  ] Set up automount Arbitrary…s File System Automount Point.
	2022-06-02T18:01:29.406660762Z  [  OK  ] Reached target Local Encrypted Volumes.
	2022-06-02T18:01:29.406665847Z  [  OK  ] Reached target Network is Online.
	2022-06-02T18:01:29.406669846Z  [  OK  ] Reached target Paths.
	2022-06-02T18:01:29.406696304Z  [  OK  ] Reached target Slices.
	2022-06-02T18:01:29.406705400Z  [  OK  ] Reached target Swap.
	2022-06-02T18:01:29.406949554Z  [  OK  ] Listening on Journal Audit Socket.
	2022-06-02T18:01:29.407034757Z  [  OK  ] Listening on Journal Socket (/dev/log).
	2022-06-02T18:01:29.407136645Z  [  OK  ] Listening on Journal Socket.
	2022-06-02T18:01:29.408824571Z           Mounting Huge Pages File System...
	2022-06-02T18:01:29.410258698Z           Mounting Kernel Debug File System...
	2022-06-02T18:01:29.411718845Z           Mounting Kernel Trace File System...
	2022-06-02T18:01:29.413650528Z           Starting Journal Service...
	2022-06-02T18:01:29.415619100Z           Starting Create list of st…odes for the current kernel...
	2022-06-02T18:01:29.417912149Z           Mounting FUSE Control File System...
	2022-06-02T18:01:29.418891040Z           Starting Remount Root and Kernel File Systems...
	2022-06-02T18:01:29.420257553Z           Starting Apply Kernel Variables...
	2022-06-02T18:01:29.422897568Z  [  OK  ] Mounted Huge Pages File System.
	2022-06-02T18:01:29.422917196Z  [  OK  ] Mounted Kernel Debug File System.
	2022-06-02T18:01:29.422921656Z  [  OK  ] Mounted Kernel Trace File System.
	2022-06-02T18:01:29.423359417Z  [  OK  ] Finished Create list of st… nodes for the current kernel.
	2022-06-02T18:01:29.423591575Z  [  OK  ] Mounted FUSE Control File System.
	2022-06-02T18:01:29.426323728Z  [  OK  ] Finished Remount Root and Kernel File Systems.
	2022-06-02T18:01:29.428040044Z           Starting Create System Users...
	2022-06-02T18:01:29.434330773Z           Starting Update UTMP about System Boot/Shutdown...
	2022-06-02T18:01:29.435174793Z  [  OK  ] Finished Apply Kernel Variables.
	2022-06-02T18:01:29.442360738Z  [  OK  ] Finished Update UTMP about System Boot/Shutdown.
	2022-06-02T18:01:29.447907100Z  [  OK  ] Started Journal Service.
	2022-06-02T18:01:29.450095964Z           Starting Flush Journal to Persistent Storage...
	2022-06-02T18:01:29.456675811Z  [  OK  ] Finished Create System Users.
	2022-06-02T18:01:29.457417394Z  [  OK  ] Finished Flush Journal to Persistent Storage.
	2022-06-02T18:01:29.459836512Z           Starting Create Static Device Nodes in /dev...
	2022-06-02T18:01:29.466591212Z  [  OK  ] Finished Create Static Device Nodes in /dev.
	2022-06-02T18:01:29.466671877Z  [  OK  ] Reached target Local File Systems (Pre).
	2022-06-02T18:01:29.466691063Z  [  OK  ] Reached target Local File Systems.
	2022-06-02T18:01:29.466950987Z  [  OK  ] Reached target System Initialization.
	2022-06-02T18:01:29.466967401Z  [  OK  ] Started Daily Cleanup of Temporary Directories.
	2022-06-02T18:01:29.466973806Z  [  OK  ] Reached target Timers.
	2022-06-02T18:01:29.467169244Z  [  OK  ] Listening on BuildKit.
	2022-06-02T18:01:29.467311174Z  [  OK  ] Listening on D-Bus System Message Bus Socket.
	2022-06-02T18:01:29.468453122Z           Starting Docker Socket for the API.
	2022-06-02T18:01:29.472377315Z           Starting Podman API Socket.
	2022-06-02T18:01:29.472751836Z  [  OK  ] Listening on Docker Socket for the API.
	2022-06-02T18:01:29.473823525Z  [  OK  ] Listening on Podman API Socket.
	2022-06-02T18:01:29.473842010Z  [  OK  ] Reached target Sockets.
	2022-06-02T18:01:29.473867362Z  [  OK  ] Reached target Basic System.
	2022-06-02T18:01:29.475188197Z           Starting containerd container runtime...
	2022-06-02T18:01:29.476453906Z  [  OK  ] Started D-Bus System Message Bus.
	2022-06-02T18:01:29.479141940Z           Starting minikube automount...
	2022-06-02T18:01:29.480481559Z           Starting OpenBSD Secure Shell server...
	2022-06-02T18:01:29.497667002Z  [  OK  ] Finished minikube automount.
	2022-06-02T18:01:29.501375783Z  [  OK  ] Started OpenBSD Secure Shell server.
	2022-06-02T18:01:29.543661372Z  [  OK  ] Started containerd container runtime.
	2022-06-02T18:01:29.544900658Z           Starting Docker Application Container Engine...
	2022-06-02T18:01:29.785421600Z  [  OK  ] Started Docker Application Container Engine.
	2022-06-02T18:01:29.785487356Z  [  OK  ] Reached target Multi-User System.
	2022-06-02T18:01:29.785510276Z  [  OK  ] Reached target Graphical Interface.
	2022-06-02T18:01:29.786939655Z           Starting Update UTMP about System Runlevel Changes...
	2022-06-02T18:01:29.794756035Z  [  OK  ] Finished Update UTMP about System Runlevel Changes.
	2022-06-02T18:06:20.422356963Z  [  OK  ] Stopped target Graphical Interface.
	2022-06-02T18:06:20.422436018Z  [  OK  ] Stopped target Multi-User System.
	2022-06-02T18:06:20.422581433Z  [  OK  ] Stopped target Timers.
	2022-06-02T18:06:20.422881739Z  [  OK  ] Stopped Daily Cleanup of Temporary Directories.
	2022-06-02T18:06:20.424712617Z           Stopping D-Bus System Message Bus...
	2022-06-02T18:06:20.424899701Z           Stopping Docker Application Container Engine...
	2022-06-02T18:06:20.426799339Z           Stopping kubelet: The Kubernetes Node Agent...
	2022-06-02T18:06:20.426834962Z           Stopping OpenBSD Secure Shell server...
	2022-06-02T18:06:20.426840665Z  [  OK  ] Stopped D-Bus System Message Bus.
	2022-06-02T18:06:20.427435680Z  [  OK  ] Stopped OpenBSD Secure Shell server.
	2022-06-02T18:06:20.535737875Z  [  OK  ] Stopped kubelet: The Kubernetes Node Agent.
	2022-06-02T18:06:20.738910224Z  [  OK  ] Unmounted /var/lib/docker/…44c6c8055055cd80e0/mounts/shm.
	2022-06-02T18:06:20.741323985Z  [  OK  ] Unmounted /var/lib/docker/…b8faa30b1f595643c440e9/merged.
	2022-06-02T18:06:20.753364496Z  [  OK  ] Unmounted /var/lib/docker/…8aed59381fff95b04e7777/merged.
	2022-06-02T18:06:20.755108447Z  [  OK  ] Unmounted /var/lib/docker/…42d8f1b09dd2a872c64709/merged.
	2022-06-02T18:06:20.758866825Z  [  OK  ] Unmounted /var/lib/docker/…85308d510fed937196/mounts/shm.
	2022-06-02T18:06:20.760968263Z  [  OK  ] Unmounted /var/lib/docker/…a81b7a812f08b35335cfa3/merged.
	2022-06-02T18:06:20.763389079Z  [  OK  ] Unmounted /var/lib/docker/…fdf56ec4b7478274397c68/merged.
	2022-06-02T18:06:20.764169748Z  [  OK  ] Unmounted /var/lib/docker/…a50b5ef22121446055/mounts/shm.
	2022-06-02T18:06:20.764801720Z  [  OK  ] Unmounted /var/lib/docker/…35410772eb30afd3a8ac91/merged.
	2022-06-02T18:06:20.773238079Z  [  OK  ] Unmounted /var/lib/docker/…b278683c353a72039f/mounts/shm.
	2022-06-02T18:06:20.773876178Z  [  OK  ] Unmounted /var/lib/docker/…71df5f43ba561b663d3da6/merged.
	2022-06-02T18:06:20.775281670Z  [  OK  ] Unmounted /var/lib/docker/…ce27bd58d174a85999/mounts/shm.
	2022-06-02T18:06:20.775398954Z  [  OK  ] Unmounted /var/lib/docker/…5aa5e70ae8f1800238d3bc/merged.
	2022-06-02T18:06:20.779602056Z  [  OK  ] Unmounted /var/lib/docker/…fd00c7bdf5042abfc0/mounts/shm.
	2022-06-02T18:06:20.780076856Z  [  OK  ] Unmounted /var/lib/docker/…fff88df05231ac4c06f76d/merged.
	2022-06-02T18:06:21.006048024Z  [  OK  ] Unmounted /run/docker/netns/074dfeac4343.
	2022-06-02T18:06:21.007284622Z  [  OK  ] Unmounted /var/lib/docker/…3654978d85037ce41c/mounts/shm.
	2022-06-02T18:06:21.007549696Z  [  OK  ] Unmounted /var/lib/docker/…da7d44f7bb96377952adb3/merged.
	2022-06-02T18:06:21.054658948Z  [  OK  ] Unmounted /run/docker/netns/2d5d7e3331b1.
	2022-06-02T18:06:21.055761384Z  [  OK  ] Unmounted /var/lib/docker/…a4e40ca0a789f66b56/mounts/shm.
	2022-06-02T18:06:21.056163782Z  [  OK  ] Unmounted /var/lib/docker/…23c1fd001c4078f9857da7/merged.
	2022-06-02T18:06:21.678856952Z  [  OK  ] Unmounted /var/lib/docker/…3a3581c3529f988f655448/merged.
	2022-06-02T18:06:23.696056241Z  [*     ] A stop job is running for Docker Ap…n Container Engine (1s / 1min 28s)
	2022-06-02T18:06:24.196004370Z  [**    ] A stop job is running for Docker Ap…n Container Engine (2s / 1min 28s)
	2022-06-02T18:06:24.695988204Z  [***   ] A stop job is running for Docker Ap…n Container Engine (2s / 1min 28s)
	2022-06-02T18:06:25.195980621Z  [ ***  ] A stop job is running for Docker Ap…n Container Engine (3s / 1min 28s)
	2022-06-02T18:06:25.555165737Z  [  *** ] A stop job is running for Docker Ap…n Container Engine (3s / 1min 28s)
	2022-06-02T18:06:25.567548679Z  [  OK  ] Unmounted /var/lib/docker/…f714eb4af933c69dcac60a/merged.
	2022-06-02T18:06:27.695984416Z  [   ***] A stop job is running for Docker Ap…n Container Engine (5s / 1min 28s)
	2022-06-02T18:06:28.196007498Z  [    **] A stop job is running for Docker Ap…n Container Engine (6s / 1min 28s)
	2022-06-02T18:06:28.695954860Z  [     *] A stop job is running for Docker Ap…n Container Engine (6s / 1min 28s)
	2022-06-02T18:06:29.196153973Z  [    **] A stop job is running for Docker Ap…n Container Engine (7s / 1min 28s)
	2022-06-02T18:06:29.695957525Z  [   ***] A stop job is running for Docker Ap…n Container Engine (7s / 1min 28s)
	2022-06-02T18:06:30.196068812Z  [  *** ] A stop job is running for Docker Ap…n Container Engine (8s / 1min 28s)
	2022-06-02T18:06:30.502837291Z  [  OK  ] Unmounted /var/lib/docker/…804411b4c63563124878bc/merged.
	2022-06-02T18:06:30.593157553Z  [  OK  ] Unmounted /var/lib/docker/…d5ebf22a1974098a1d2a4c/merged.
	2022-06-02T18:06:30.627055935Z  [  OK  ] Stopped Docker Application Container Engine.
	2022-06-02T18:06:30.627260550Z  [  OK  ] Stopped target Network is Online.
	2022-06-02T18:06:30.627376840Z           Stopping containerd container runtime...
	2022-06-02T18:06:30.628008576Z  [  OK  ] Stopped minikube automount.
	2022-06-02T18:06:30.637241518Z  [  OK  ] Stopped containerd container runtime.
	2022-06-02T18:06:30.637361948Z  [  OK  ] Stopped target Basic System.
	2022-06-02T18:06:30.637467519Z  [  OK  ] Stopped target Paths.
	2022-06-02T18:06:30.637474749Z  [  OK  ] Stopped target Slices.
	2022-06-02T18:06:30.637522754Z  [  OK  ] Stopped target Sockets.
	2022-06-02T18:06:30.638103459Z  [  OK  ] Closed BuildKit.
	2022-06-02T18:06:30.638653861Z  [  OK  ] Closed D-Bus System Message Bus Socket.
	2022-06-02T18:06:30.639146739Z  [  OK  ] Closed Docker Socket for the API.
	2022-06-02T18:06:30.639655907Z  [  OK  ] Closed Podman API Socket.
	2022-06-02T18:06:30.639671463Z  [  OK  ] Stopped target System Initialization.
	2022-06-02T18:06:30.639737655Z  [  OK  ] Stopped target Local Encrypted Volumes.
	2022-06-02T18:06:30.653393436Z  [  OK  ] Stopped Dispatch Password …ts to Console Directory Watch.
	2022-06-02T18:06:30.653427926Z  [  OK  ] Stopped target Local File Systems.
	2022-06-02T18:06:30.654594492Z           Unmounting /data...
	2022-06-02T18:06:30.655297078Z           Unmounting /etc/hostname...
	2022-06-02T18:06:30.655974410Z           Unmounting /etc/hosts...
	2022-06-02T18:06:30.657268816Z           Unmounting /etc/resolv.conf...
	2022-06-02T18:06:30.657778311Z           Unmounting /kind/product_uuid...
	2022-06-02T18:06:30.658613634Z           Unmounting /run/docker/netns/default...
	2022-06-02T18:06:30.659460762Z           Unmounting /tmp/hostpath-provisioner...
	2022-06-02T18:06:30.660348299Z           Unmounting /tmp/hostpath_pv...
	2022-06-02T18:06:30.663954101Z           Unmounting /usr/lib/modules...
	2022-06-02T18:06:30.665432494Z           Unmounting /var/lib/kubele…ected/kube-api-access-m5qcm...
	2022-06-02T18:06:30.667062533Z           Unmounting /var/lib/kubele…ected/kube-api-access-w79cf...
	2022-06-02T18:06:30.668437958Z           Unmounting /var/lib/kubele…ected/kube-api-access-9546w...
	2022-06-02T18:06:30.670016444Z           Unmounting /var/lib/kubele…ected/kube-api-access-z7xs9...
	2022-06-02T18:06:30.671308854Z           Unmounting /var/lib/kubele…ected/kube-api-access-w72rg...
	2022-06-02T18:06:30.672192911Z  [  OK  ] Stopped Apply Kernel Variables.
	2022-06-02T18:06:30.673113612Z           Stopping Update UTMP about System Boot/Shutdown...
	2022-06-02T18:06:30.676820270Z  [  OK  ] Unmounted /data.
	2022-06-02T18:06:30.677931136Z  [  OK  ] Unmounted /etc/hostname.
	2022-06-02T18:06:30.678465453Z  [  OK  ] Unmounted /etc/hosts.
	2022-06-02T18:06:30.679207476Z  [  OK  ] Unmounted /etc/resolv.conf.
	2022-06-02T18:06:30.679900974Z  [  OK  ] Unmounted /kind/product_uuid.
	2022-06-02T18:06:30.680511954Z  [  OK  ] Unmounted /run/docker/netns/default.
	2022-06-02T18:06:30.681343826Z  [  OK  ] Unmounted /tmp/hostpath-provisioner.
	2022-06-02T18:06:30.682074115Z  [  OK  ] Unmounted /tmp/hostpath_pv.
	2022-06-02T18:06:30.682806287Z  [  OK  ] Unmounted /usr/lib/modules.
	2022-06-02T18:06:30.683409074Z  [  OK  ] Unmounted /var/lib/kubelet…ojected/kube-api-access-m5qcm.
	2022-06-02T18:06:30.684037977Z  [  OK  ] Unmounted /var/lib/kubelet…ojected/kube-api-access-w79cf.
	2022-06-02T18:06:30.684711134Z  [  OK  ] Unmounted /var/lib/kubelet…ojected/kube-api-access-9546w.
	2022-06-02T18:06:30.685460185Z  [  OK  ] Unmounted /var/lib/kubelet…ojected/kube-api-access-z7xs9.
	2022-06-02T18:06:30.686047211Z  [  OK  ] Unmounted /var/lib/kubelet…ojected/kube-api-access-w72rg.
	2022-06-02T18:06:30.688161204Z           Unmounting /tmp...
	2022-06-02T18:06:30.688957268Z  [  OK  ] Stopped Update UTMP about System Boot/Shutdown.
	2022-06-02T18:06:30.691118932Z           Unmounting /var...
	2022-06-02T18:06:30.693571544Z  [  OK  ] Unmounted /tmp.
	2022-06-02T18:06:30.693708667Z  [  OK  ] Stopped target Swap.
	2022-06-02T18:06:30.695713687Z  [  OK  ] Unmounted /var.
	2022-06-02T18:06:30.695852243Z  [  OK  ] Stopped target Local File Systems (Pre).
	2022-06-02T18:06:30.695870853Z  [  OK  ] Reached target Unmount All Filesystems.
	2022-06-02T18:06:30.696686660Z  [  OK  ] Stopped Create Static Device Nodes in /dev.
	2022-06-02T18:06:30.698847273Z  [  OK  ] Stopped Create System Users.
	2022-06-02T18:06:30.699433883Z  [  OK  ] Stopped Remount Root and Kernel File Systems.
	2022-06-02T18:06:30.699504247Z  [  OK  ] Reached target Shutdown.
	2022-06-02T18:06:30.699514147Z  [  OK  ] Reached target Final Step.
	2022-06-02T18:06:30.700839956Z           Starting Halt...
	2022-06-02T18:06:30.701154221Z  [  OK  ] Finished Power-Off.
	2022-06-02T18:06:30.701291041Z  [  OK  ] Reached target Power-Off.
	
	-- /stdout --
	I0602 18:06:40.985892  569614 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 18:06:41.111888  569614 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-02 18:06:41.019498828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 18:06:41.111980  569614 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-02 18:06:41.019498828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 18:06:41.112059  569614 network_create.go:272] running [docker network inspect default-k8s-different-port-20220602180121-283122] to gather additional debugging logs...
	I0602 18:06:41.112082  569614 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220602180121-283122
	W0602 18:06:41.149253  569614 cli_runner.go:211] docker network inspect default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:41.149291  569614 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220602180121-283122]: docker network inspect default-k8s-different-port-20220602180121-283122: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220602180121-283122
	I0602 18:06:41.149311  569614 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220602180121-283122]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220602180121-283122
	
	** /stderr **
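
The probe that produced the block above is a single docker network inspect call whose non-zero exit is expected once the network is gone. A minimal Go sketch that reproduces the same check outside the test harness (standalone, hypothetical code using the network name from this run; it is not minikube's actual cli_runner helper):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Network name taken from this run's profile.
		name := "default-k8s-different-port-20220602180121-283122"
		out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// Once the network has been removed, docker exits 1 and prints
			// "Error: No such network: <name>", matching the stderr above.
			fmt.Println("inspect failed:", err)
		}
	}
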
	I0602 18:06:41.149431  569614 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 18:06:41.268979  569614 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-02 18:06:41.184968385 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 18:06:41.269472  569614 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220602180121-283122
	I0602 18:06:41.310975  569614 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/config.json ...
	I0602 18:06:41.311238  569614 machine.go:88] provisioning docker machine ...
	I0602 18:06:41.311273  569614 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220602180121-283122"
	I0602 18:06:41.311331  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:41.349922  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:41.350001  569614 machine.go:91] provisioned docker machine in 38.743956ms
	I0602 18:06:41.350072  569614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 18:06:41.350143  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:41.394586  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:41.394724  569614 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:41.595345  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:41.629698  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:41.629829  569614 retry.go:31] will retry after 380.704736ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:42.011491  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:42.047186  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:42.047317  569614 retry.go:31] will retry after 738.922478ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:42.787373  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:42.821059  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	W0602 18:06:42.821175  569614 start.go:281] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 18:06:42.821192  569614 start.go:248] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:42.821236  569614 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 18:06:42.821269  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:42.856270  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:42.856426  569614 retry.go:31] will retry after 220.164297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:43.076885  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:43.115953  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:43.116076  569614 retry.go:31] will retry after 306.771815ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:43.423671  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:43.456064  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:43.456220  569614 retry.go:31] will retry after 545.000538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:44.002052  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:44.037695  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	I0602 18:06:44.037835  569614 retry.go:31] will retry after 660.685065ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:44.698675  569614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122
	W0602 18:06:44.732387  569614 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220602180121-283122 returned with exit code 1
	W0602 18:06:44.732520  569614 start.go:296] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0602 18:06:44.732546  569614 start.go:253] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:44.732561  569614 fix.go:57] fixHost completed within 3.910164215s
	I0602 18:06:44.732576  569614 start.go:81] releasing machines lock for "default-k8s-different-port-20220602180121-283122", held for 3.910198112s
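
The "will retry after …" lines above come from a generic retry helper that sleeps for a growing, jittered interval between attempts (roughly 200ms, 380ms, 738ms in the first loop). A minimal sketch of that backoff pattern, assuming a hypothetical retry helper rather than minikube's actual retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry calls fn until it succeeds or attempts run out, sleeping for a
	// roughly doubling, jittered delay between failures.
	func retry(attempts int, base time.Duration, fn func() error) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)/4))
			fmt.Printf("will retry after %v: %v\n", jittered, err)
			time.Sleep(jittered)
			delay *= 2
		}
		return err
	}

	func main() {
		err := retry(4, 200*time.Millisecond, func() error {
			return errors.New("container not running") // stand-in for the inspect failure
		})
		fmt.Println("gave up:", err)
	}
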
	W0602 18:06:44.732802  569614 out.go:239] * Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220602180121-283122" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	* Failed to start docker container. Running "minikube delete -p default-k8s-different-port-20220602180121-283122" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0602 18:06:44.735426  569614 out.go:177] 
	W0602 18:06:44.737037  569614 out.go:239] X Exiting due to GUEST_PROVISION_CONTAINER_EXITED: Docker container exited prematurely after it was created, consider investigating Docker's performance/health.
	X Exiting due to GUEST_PROVISION_CONTAINER_EXITED: Docker container exited prematurely after it was created, consider investigating Docker's performance/health.
	I0602 18:06:44.738866  569614 out.go:177] 

** /stderr **
start_stop_delete_test.go:261: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-different-port-20220602180121-283122 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6": exit status 80
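
Each of the repeated docker container inspect -f calls in the log evaluates a Go template over the container's NetworkSettings.Ports map; on a stopped container that map is empty (see "Ports": {} in the inspect output below), so indexing "22/tcp" fails and the CLI exits 1. A sketch of the equivalent lookup done in Go with plain os/exec and encoding/json (hypothetical code, not the docker client library minikube uses):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			return "", err
		}
		// On a stopped container Ports is empty, so this lookup fails -- the
		// same condition behind "unable to inspect a not running container
		// to get SSH port" in the log above.
		bindings := entries[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			return "", fmt.Errorf("no host binding for 22/tcp (container not running?)")
		}
		return bindings[0].HostPort, nil
	}

	func main() {
		port, err := sshHostPort("default-k8s-different-port-20220602180121-283122")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("ssh port:", port)
	}
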
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220602180121-283122
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220602180121-283122:

-- stdout --
	[
	    {
	        "Id": "c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503",
	        "Created": "2022-06-02T18:01:28.65338102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431 not found",
	            "StartedAt": "2022-06-02T18:01:29.039934789Z",
	            "FinishedAt": "2022-06-02T18:06:30.797930837Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hostname",
	        "HostsPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hosts",
	        "LogPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503-json.log",
	        "Name": "/default-k8s-different-port-20220602180121-283122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220602180121-283122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220602180121-283122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48-init/diff:/var/lib/docker/overlay2/9517781e97ae29bbe1cf38381a601e806342524da1c5d5dc15ba54ec17f204b6/diff:/var/lib/docker/overlay2/28e14d8724bd82b9b6adde55b66e6d1f1be4fa6c3379f2f44ca1138dd648175a/diff:/var/lib/docker/overlay2/24589dfb9ddc941e7cb22b5e38dec69d1af0a29a330d344152ca7f26010354a8/diff:/var/lib/docker/overlay2/66c45d6273a3507b6e0b6b0753ec4fc82f07160fd115626e24a42bda27bbba86/diff:/var/lib/docker/overlay2/a6f0c661bd178a2a0fef034493a114067e8952b8516e69aff1e1e94a75918bbc/diff:/var/lib/docker/overlay2/5b40a045506372be84bfb01d05b068bc4cc926c5eb9e868226c4c386e8a331c7/diff:/var/lib/docker/overlay2/dbf9a9c6e59628984c95ea93ff4ada66c53b22ca3c2f910f70efba895cf40718/diff:/var/lib/docker/overlay2/5eea619393ee989a35a37a13b0633685af91729e1b74f6e73fd94b63c9d7d1fa/diff:/var/lib/docker/overlay2/24de6fbece5386c3658fa3e2c01723e6c1c24fb837d53b3d5a8c75b64ce2cd06/diff:/var/lib/docker/overlay2/04593e
f7d3bf72ee8b7439b0326af5b9818930e0dd48c35dba97e3b4ae39f4ee/diff:/var/lib/docker/overlay2/09310697cecb557085bb4d5dfc810a82fc533759ad98179263b2fc94153a64fa/diff:/var/lib/docker/overlay2/ee75c420f0615a19a5d44cfe042bdca5a2cf0f1c541168ba424140cf2aa7f98d/diff:/var/lib/docker/overlay2/fc8643a3e2060ab0365cd1227ab9ebbd5a573e6ebf8379bc6b64a756a1f66562/diff:/var/lib/docker/overlay2/0f8571cab87b35114fab495cbde9af8ac7e58be99412da82a79c90239e2227e1/diff:/var/lib/docker/overlay2/e1a5ae079e4f566081e5e786955257c23809c8719ea6b4593c8c91ac00380379/diff:/var/lib/docker/overlay2/373565e63dec12c85906e80b4a030f7d852992a756749be9424ac17802d589c0/diff:/var/lib/docker/overlay2/b886b42aa5e9f06b4381a6f58d55338774a2d02af2e91e3de156aa4bca4e4be0/diff:/var/lib/docker/overlay2/dee534a744ce54d08ca752b7f64e1772c063d4ba1e008c5a9773188bb009c377/diff:/var/lib/docker/overlay2/c8cf6667f29ffc5417675e4bd8eee6a91468de13292e3cb6ae2c70d3ad56d5fd/diff:/var/lib/docker/overlay2/072b57a4228c3928bdb27162809b102b485bb94c5f726c9a3e9892267ed30839/diff:/var/lib/d
ocker/overlay2/491d5b51bf6de1cfc4eef288c454f54e3b424e3b83fd9dc216c35891c7923f55/diff:/var/lib/docker/overlay2/78f167c2cc286e4058ef09d5a719437afddde854aac41b8d054567865fa61024/diff:/var/lib/docker/overlay2/10763b3ba26983ed9426ccca08bfceb7037d95c69827847e2ff755b4f2c5a4ab/diff:/var/lib/docker/overlay2/e559098efb21164d96bd0aee50fee9f48310df68887c5c884f4ba3f3fc895260/diff:/var/lib/docker/overlay2/555ef1505fe2b1ab4244f073e9212f269d70d8cd6ca0cd517d21cc80cd25b4c1/diff:/var/lib/docker/overlay2/0201da7c907a1f8f564cda48c3f3a403e484d901739d2584f1343a7907685c5e/diff:/var/lib/docker/overlay2/0abae5d84f8497a5114349421415431c43a34015ad98f6c9c3140143d2f1f383/diff:/var/lib/docker/overlay2/80c37982eef2dc00bd08f208551df377570134af84008156f6a500855fd2ba10/diff:/var/lib/docker/overlay2/cc38481bb11be97cd54c0032709337a491580c2f0ae6d3f7eec4c3616f8e3126/diff:/var/lib/docker/overlay2/dda978aeb4c7317bafbc077229c4417a4d8e7658d9604803c401448b471eab5e/diff:/var/lib/docker/overlay2/7c230ae0970689e2932cdaadc9e3f86ec8215fbf384c9eb59cfb958daf7
3197e/diff:/var/lib/docker/overlay2/145db51b169211e46c3bbab5ca998b2a019a4a813801acc2dae9b2dc09bc2ecc/diff:/var/lib/docker/overlay2/e604163672a013549c59bc042d1b932a3dad9e104d7079fbbb3ca24c29351c0d/diff:/var/lib/docker/overlay2/5a304d8680fd3fb594754173a70646923e2715ed21c98b8c43e1f1ef2d7abd6a/diff:/var/lib/docker/overlay2/b77f00351ddd70be2f3d9dd622b78e1a2937c4ba71585b357fef88d285eb762e/diff:/var/lib/docker/overlay2/196b6ae042b9b08748253c16f5e2b56d7f9a25077008d3d447cbc75447ed5e2a/diff:/var/lib/docker/overlay2/8c7c3fbb0bed8f7440d11104257806c70d16fd7d432311fc208d5012a439e5a1/diff:/var/lib/docker/overlay2/10604aa3b8ed5698c9fa8f55e00b9a9dc27d6b99135255d6c756fc080e615fc6/diff:/var/lib/docker/overlay2/c6602a17e9b0ab1d84297109204b48e60f293952fbc1feaf362895ec86860ead/diff:/var/lib/docker/overlay2/3828c5e50db9be2b4bc30fce040392ea735da5eaa819ad0848cb4fb79fc367da/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220602180121-283122",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220602180121-283122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220602180121-283122",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3989e814c0ac91c6e28c041e22d15c3ab28ed2a1bbec4bcc9b0a154e6c83dc06",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/3989e814c0ac",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220602180121-283122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c65299b1d424",
	                        "default-k8s-different-port-20220602180121-283122"
	                    ],
	                    "NetworkID": "c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602180121-283122 -n default-k8s-different-port-20220602180121-283122
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602180121-283122 -n default-k8s-different-port-20220602180121-283122: exit status 7 (110.328236ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220602180121-283122" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (13.48s)
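
The underlying failure is visible in the inspect output above: the stopped container still references NetworkID c9942ce2…, its State.Error reads "network … not found", and Docker will not restart a container whose network has been deleted. That is why the run ends in GUEST_PROVISION_CONTAINER_EXITED and why "minikube delete -p …", which removes the stale container and volume, is the suggested recovery. A small Go sketch of that consistency check (hypothetical code; it shells out to the docker CLI):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Container name taken from this run's profile.
		container := "default-k8s-different-port-20220602180121-283122"

		// The stopped container still records which network it must join.
		out, err := exec.Command("docker", "container", "inspect",
			"-f", "{{.HostConfig.NetworkMode}}", container).Output()
		if err != nil {
			fmt.Println("container gone:", err)
			return
		}
		network := strings.TrimSpace(string(out))

		// If the network was deleted in the meantime, `docker start` fails
		// with "network ... not found", matching State.Error above.
		if err := exec.Command("docker", "network", "inspect", network).Run(); err != nil {
			fmt.Printf("network %q missing; recreate it (or delete the profile) before restarting\n", network)
			return
		}
		fmt.Printf("network %q present; container should be restartable\n", network)
	}
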

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (0.14s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:277: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20220602180121-283122" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220602180121-283122
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220602180121-283122:

-- stdout --
	[
	    {
	        "Id": "c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503",
	        "Created": "2022-06-02T18:01:28.65338102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431 not found",
	            "StartedAt": "2022-06-02T18:01:29.039934789Z",
	            "FinishedAt": "2022-06-02T18:06:30.797930837Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hostname",
	        "HostsPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hosts",
	        "LogPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503-json.log",
	        "Name": "/default-k8s-different-port-20220602180121-283122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220602180121-283122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220602180121-283122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48-init/diff:/var/lib/docker/overlay2/9517781e97ae29bbe1cf38381a601e806342524da1c5d5dc15ba54ec17f204b6/diff:/var/lib/docker/overlay2/28e14d8724bd82b9b6adde55b66e6d1f1be4fa6c3379f2f44ca1138dd648175a/diff:/var/lib/docker/overlay2/24589dfb9ddc941e7cb22b5e38dec69d1af0a29a330d344152ca7f26010354a8/diff:/var/lib/docker/overlay2/66c45d6273a3507b6e0b6b0753ec4fc82f07160fd115626e24a42bda27bbba86/diff:/var/lib/docker/overlay2/a6f0c661bd178a2a0fef034493a114067e8952b8516e69aff1e1e94a75918bbc/diff:/var/lib/docker/overlay2/5b40a045506372be84bfb01d05b068bc4cc926c5eb9e868226c4c386e8a331c7/diff:/var/lib/docker/overlay2/dbf9a9c6e59628984c95ea93ff4ada66c53b22ca3c2f910f70efba895cf40718/diff:/var/lib/docker/overlay2/5eea619393ee989a35a37a13b0633685af91729e1b74f6e73fd94b63c9d7d1fa/diff:/var/lib/docker/overlay2/24de6fbece5386c3658fa3e2c01723e6c1c24fb837d53b3d5a8c75b64ce2cd06/diff:/var/lib/docker/overlay2/04593e
f7d3bf72ee8b7439b0326af5b9818930e0dd48c35dba97e3b4ae39f4ee/diff:/var/lib/docker/overlay2/09310697cecb557085bb4d5dfc810a82fc533759ad98179263b2fc94153a64fa/diff:/var/lib/docker/overlay2/ee75c420f0615a19a5d44cfe042bdca5a2cf0f1c541168ba424140cf2aa7f98d/diff:/var/lib/docker/overlay2/fc8643a3e2060ab0365cd1227ab9ebbd5a573e6ebf8379bc6b64a756a1f66562/diff:/var/lib/docker/overlay2/0f8571cab87b35114fab495cbde9af8ac7e58be99412da82a79c90239e2227e1/diff:/var/lib/docker/overlay2/e1a5ae079e4f566081e5e786955257c23809c8719ea6b4593c8c91ac00380379/diff:/var/lib/docker/overlay2/373565e63dec12c85906e80b4a030f7d852992a756749be9424ac17802d589c0/diff:/var/lib/docker/overlay2/b886b42aa5e9f06b4381a6f58d55338774a2d02af2e91e3de156aa4bca4e4be0/diff:/var/lib/docker/overlay2/dee534a744ce54d08ca752b7f64e1772c063d4ba1e008c5a9773188bb009c377/diff:/var/lib/docker/overlay2/c8cf6667f29ffc5417675e4bd8eee6a91468de13292e3cb6ae2c70d3ad56d5fd/diff:/var/lib/docker/overlay2/072b57a4228c3928bdb27162809b102b485bb94c5f726c9a3e9892267ed30839/diff:/var/lib/d
ocker/overlay2/491d5b51bf6de1cfc4eef288c454f54e3b424e3b83fd9dc216c35891c7923f55/diff:/var/lib/docker/overlay2/78f167c2cc286e4058ef09d5a719437afddde854aac41b8d054567865fa61024/diff:/var/lib/docker/overlay2/10763b3ba26983ed9426ccca08bfceb7037d95c69827847e2ff755b4f2c5a4ab/diff:/var/lib/docker/overlay2/e559098efb21164d96bd0aee50fee9f48310df68887c5c884f4ba3f3fc895260/diff:/var/lib/docker/overlay2/555ef1505fe2b1ab4244f073e9212f269d70d8cd6ca0cd517d21cc80cd25b4c1/diff:/var/lib/docker/overlay2/0201da7c907a1f8f564cda48c3f3a403e484d901739d2584f1343a7907685c5e/diff:/var/lib/docker/overlay2/0abae5d84f8497a5114349421415431c43a34015ad98f6c9c3140143d2f1f383/diff:/var/lib/docker/overlay2/80c37982eef2dc00bd08f208551df377570134af84008156f6a500855fd2ba10/diff:/var/lib/docker/overlay2/cc38481bb11be97cd54c0032709337a491580c2f0ae6d3f7eec4c3616f8e3126/diff:/var/lib/docker/overlay2/dda978aeb4c7317bafbc077229c4417a4d8e7658d9604803c401448b471eab5e/diff:/var/lib/docker/overlay2/7c230ae0970689e2932cdaadc9e3f86ec8215fbf384c9eb59cfb958daf7
3197e/diff:/var/lib/docker/overlay2/145db51b169211e46c3bbab5ca998b2a019a4a813801acc2dae9b2dc09bc2ecc/diff:/var/lib/docker/overlay2/e604163672a013549c59bc042d1b932a3dad9e104d7079fbbb3ca24c29351c0d/diff:/var/lib/docker/overlay2/5a304d8680fd3fb594754173a70646923e2715ed21c98b8c43e1f1ef2d7abd6a/diff:/var/lib/docker/overlay2/b77f00351ddd70be2f3d9dd622b78e1a2937c4ba71585b357fef88d285eb762e/diff:/var/lib/docker/overlay2/196b6ae042b9b08748253c16f5e2b56d7f9a25077008d3d447cbc75447ed5e2a/diff:/var/lib/docker/overlay2/8c7c3fbb0bed8f7440d11104257806c70d16fd7d432311fc208d5012a439e5a1/diff:/var/lib/docker/overlay2/10604aa3b8ed5698c9fa8f55e00b9a9dc27d6b99135255d6c756fc080e615fc6/diff:/var/lib/docker/overlay2/c6602a17e9b0ab1d84297109204b48e60f293952fbc1feaf362895ec86860ead/diff:/var/lib/docker/overlay2/3828c5e50db9be2b4bc30fce040392ea735da5eaa819ad0848cb4fb79fc367da/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220602180121-283122",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220602180121-283122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220602180121-283122",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3989e814c0ac91c6e28c041e22d15c3ab28ed2a1bbec4bcc9b0a154e6c83dc06",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/3989e814c0ac",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220602180121-283122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c65299b1d424",
	                        "default-k8s-different-port-20220602180121-283122"
	                    ],
	                    "NetworkID": "c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602180121-283122 -n default-k8s-different-port-20220602180121-283122
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602180121-283122 -n default-k8s-different-port-20220602180121-283122: exit status 7 (104.101092ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220602180121-283122" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (0.14s)
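
Both follow-up subtests fail fast for the same reason: once SecondStart failed, no kubeconfig context for the profile exists, so every kubectl call aborts with "context … does not exist". A quick existence check, as a hypothetical Go helper shelling out to kubectl:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasContext reports whether the named context exists in the current
	// kubeconfig, mirroring the "context ... does not exist" failures above.
	func hasContext(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, c := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if c == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasContext("default-k8s-different-port-20220602180121-283122")
		fmt.Println(ok, err)
	}
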

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:290: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-different-port-20220602180121-283122" does not exist
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220602180121-283122 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:293: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220602180121-283122 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (43.467582ms)

** stderr ** 
	error: context "default-k8s-different-port-20220602180121-283122" does not exist

** /stderr **
start_stop_delete_test.go:295: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-different-port-20220602180121-283122 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:299: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
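
For reference, the assertion at start_stop_delete_test.go:299 boils down to describing the dashboard deployment and looking for the expected image string; with the context gone it never gets that far. A sketch of the same check as standalone, hypothetical Go code (not the test's actual helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Context and expected image taken from the assertion above; with
		// the context missing this fails at the kubectl step, as logged.
		out, err := exec.Command("kubectl",
			"--context", "default-k8s-different-port-20220602180121-283122",
			"describe", "deploy/dashboard-metrics-scraper",
			"-n", "kubernetes-dashboard").CombinedOutput()
		if err != nil {
			fmt.Printf("describe failed: %v\n%s", err, out)
			return
		}
		if !strings.Contains(string(out), "k8s.gcr.io/echoserver:1.4") {
			fmt.Println("addon did not load the expected image")
			return
		}
		fmt.Println("expected image present")
	}
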
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220602180121-283122
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220602180121-283122:

-- stdout --
	[
	    {
	        "Id": "c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503",
	        "Created": "2022-06-02T18:01:28.65338102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431 not found",
	            "StartedAt": "2022-06-02T18:01:29.039934789Z",
	            "FinishedAt": "2022-06-02T18:06:30.797930837Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hostname",
	        "HostsPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hosts",
	        "LogPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503-json.log",
	        "Name": "/default-k8s-different-port-20220602180121-283122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220602180121-283122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220602180121-283122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48-init/diff:/var/lib/docker/overlay2/9517781e97ae29bbe1cf38381a601e806342524da1c5d5dc15ba54ec17f204b6/diff:/var/lib/docker/overlay2/28e14d8724bd82b9b6adde55b66e6d1f1be4fa6c3379f2f44ca1138dd648175a/diff:/var/lib/docker/overlay2/24589dfb9ddc941e7cb22b5e38dec69d1af0a29a330d344152ca7f26010354a8/diff:/var/lib/docker/overlay2/66c45d6273a3507b6e0b6b0753ec4fc82f07160fd115626e24a42bda27bbba86/diff:/var/lib/docker/overlay2/a6f0c661bd178a2a0fef034493a114067e8952b8516e69aff1e1e94a75918bbc/diff:/var/lib/docker/overlay2/5b40a045506372be84bfb01d05b068bc4cc926c5eb9e868226c4c386e8a331c7/diff:/var/lib/docker/overlay2/dbf9a9c6e59628984c95ea93ff4ada66c53b22ca3c2f910f70efba895cf40718/diff:/var/lib/docker/overlay2/5eea619393ee989a35a37a13b0633685af91729e1b74f6e73fd94b63c9d7d1fa/diff:/var/lib/docker/overlay2/24de6fbece5386c3658fa3e2c01723e6c1c24fb837d53b3d5a8c75b64ce2cd06/diff:/var/lib/docker/overlay2/04593e
f7d3bf72ee8b7439b0326af5b9818930e0dd48c35dba97e3b4ae39f4ee/diff:/var/lib/docker/overlay2/09310697cecb557085bb4d5dfc810a82fc533759ad98179263b2fc94153a64fa/diff:/var/lib/docker/overlay2/ee75c420f0615a19a5d44cfe042bdca5a2cf0f1c541168ba424140cf2aa7f98d/diff:/var/lib/docker/overlay2/fc8643a3e2060ab0365cd1227ab9ebbd5a573e6ebf8379bc6b64a756a1f66562/diff:/var/lib/docker/overlay2/0f8571cab87b35114fab495cbde9af8ac7e58be99412da82a79c90239e2227e1/diff:/var/lib/docker/overlay2/e1a5ae079e4f566081e5e786955257c23809c8719ea6b4593c8c91ac00380379/diff:/var/lib/docker/overlay2/373565e63dec12c85906e80b4a030f7d852992a756749be9424ac17802d589c0/diff:/var/lib/docker/overlay2/b886b42aa5e9f06b4381a6f58d55338774a2d02af2e91e3de156aa4bca4e4be0/diff:/var/lib/docker/overlay2/dee534a744ce54d08ca752b7f64e1772c063d4ba1e008c5a9773188bb009c377/diff:/var/lib/docker/overlay2/c8cf6667f29ffc5417675e4bd8eee6a91468de13292e3cb6ae2c70d3ad56d5fd/diff:/var/lib/docker/overlay2/072b57a4228c3928bdb27162809b102b485bb94c5f726c9a3e9892267ed30839/diff:/var/lib/d
ocker/overlay2/491d5b51bf6de1cfc4eef288c454f54e3b424e3b83fd9dc216c35891c7923f55/diff:/var/lib/docker/overlay2/78f167c2cc286e4058ef09d5a719437afddde854aac41b8d054567865fa61024/diff:/var/lib/docker/overlay2/10763b3ba26983ed9426ccca08bfceb7037d95c69827847e2ff755b4f2c5a4ab/diff:/var/lib/docker/overlay2/e559098efb21164d96bd0aee50fee9f48310df68887c5c884f4ba3f3fc895260/diff:/var/lib/docker/overlay2/555ef1505fe2b1ab4244f073e9212f269d70d8cd6ca0cd517d21cc80cd25b4c1/diff:/var/lib/docker/overlay2/0201da7c907a1f8f564cda48c3f3a403e484d901739d2584f1343a7907685c5e/diff:/var/lib/docker/overlay2/0abae5d84f8497a5114349421415431c43a34015ad98f6c9c3140143d2f1f383/diff:/var/lib/docker/overlay2/80c37982eef2dc00bd08f208551df377570134af84008156f6a500855fd2ba10/diff:/var/lib/docker/overlay2/cc38481bb11be97cd54c0032709337a491580c2f0ae6d3f7eec4c3616f8e3126/diff:/var/lib/docker/overlay2/dda978aeb4c7317bafbc077229c4417a4d8e7658d9604803c401448b471eab5e/diff:/var/lib/docker/overlay2/7c230ae0970689e2932cdaadc9e3f86ec8215fbf384c9eb59cfb958daf7
3197e/diff:/var/lib/docker/overlay2/145db51b169211e46c3bbab5ca998b2a019a4a813801acc2dae9b2dc09bc2ecc/diff:/var/lib/docker/overlay2/e604163672a013549c59bc042d1b932a3dad9e104d7079fbbb3ca24c29351c0d/diff:/var/lib/docker/overlay2/5a304d8680fd3fb594754173a70646923e2715ed21c98b8c43e1f1ef2d7abd6a/diff:/var/lib/docker/overlay2/b77f00351ddd70be2f3d9dd622b78e1a2937c4ba71585b357fef88d285eb762e/diff:/var/lib/docker/overlay2/196b6ae042b9b08748253c16f5e2b56d7f9a25077008d3d447cbc75447ed5e2a/diff:/var/lib/docker/overlay2/8c7c3fbb0bed8f7440d11104257806c70d16fd7d432311fc208d5012a439e5a1/diff:/var/lib/docker/overlay2/10604aa3b8ed5698c9fa8f55e00b9a9dc27d6b99135255d6c756fc080e615fc6/diff:/var/lib/docker/overlay2/c6602a17e9b0ab1d84297109204b48e60f293952fbc1feaf362895ec86860ead/diff:/var/lib/docker/overlay2/3828c5e50db9be2b4bc30fce040392ea735da5eaa819ad0848cb4fb79fc367da/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220602180121-283122",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220602180121-283122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220602180121-283122",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3989e814c0ac91c6e28c041e22d15c3ab28ed2a1bbec4bcc9b0a154e6c83dc06",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/3989e814c0ac",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220602180121-283122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c65299b1d424",
	                        "default-k8s-different-port-20220602180121-283122"
	                    ],
	                    "NetworkID": "c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602180121-283122 -n default-k8s-different-port-20220602180121-283122
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602180121-283122 -n default-k8s-different-port-20220602180121-283122: exit status 7 (106.007056ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220602180121-283122" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (0.19s)
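
The post-mortem above ends at the status gate: helpers_test runs "out/minikube-linux-amd64 status --format={{.Host}}", treats the non-zero exit as informative rather than fatal (minikube encodes cluster state in the status exit code, hence "exit status 7 (may be ok)"), and skips log retrieval whenever the host is not Running. A minimal Go sketch of that gate follows; the function names and wiring are illustrative, not minikube's actual helpers_test code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs the same status probe as the harness. A non-zero exit
// is expected for stopped clusters, so the error is deliberately ignored
// and only the formatted host field ("Running", "Stopped", ...) is used.
func hostState(profile string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "default-k8s-different-port-20220602180121-283122"
	if state := hostState(profile); state != "Running" {
		fmt.Printf("%q host is not running, skipping log retrieval (state=%q)\n",
			profile, state)
		return
	}
	// ... otherwise fetch `minikube logs -p <profile>` for the post-mortem ...
}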

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220602180121-283122 "sudo crictl images -o json"
start_stop_delete_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220602180121-283122 "sudo crictl images -o json": exit status 89 (116.549407ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220602180121-283122"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:306: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220602180121-283122 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:306: failed to decode images json: invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-different-port-20220602180121-283122"
start_stop_delete_test.go:306: v1.23.6 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns/coredns:v1.8.6",
- 	"k8s.gcr.io/etcd:3.5.1-0",
- 	"k8s.gcr.io/kube-apiserver:v1.23.6",
- 	"k8s.gcr.io/kube-controller-manager:v1.23.6",
- 	"k8s.gcr.io/kube-proxy:v1.23.6",
- 	"k8s.gcr.io/kube-scheduler:v1.23.6",
- 	"k8s.gcr.io/pause:3.6",
}
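
Both failures in this step stem from the stopped node: the ssh command exits 89 with the human-readable hint instead of JSON, the decoder then trips on the leading '*', and the whole want list above is reported missing. Below is a minimal Go sketch of the decode-and-diff step, assuming crictl's `images -o json` schema ({"images":[{"repoTags":[...]}]}); the helper names are illustrative, not the test's actual code.

package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// crictlImages mirrors the part of `crictl images -o json` output the
// test cares about: the repo tags of every image on the node.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// tagsFromJSON decodes the crictl output into the flat, sorted tag list
// that gets diffed against the expected v1.23.6 image set.
func tagsFromJSON(raw []byte) ([]string, error) {
	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return nil, fmt.Errorf("failed to decode images json: %w", err)
	}
	var tags []string
	for _, img := range out.Images {
		tags = append(tags, img.RepoTags...)
	}
	sort.Strings(tags)
	return tags, nil
}

func main() {
	// With the node stopped, ssh prints the hint below instead of JSON,
	// which reproduces the "invalid character '*'" error in the log.
	_, err := tagsFromJSON([]byte("* The control plane node must be running for this command"))
	fmt.Println(err)
}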
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220602180121-283122
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220602180121-283122:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503",
	        "Created": "2022-06-02T18:01:28.65338102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431 not found",
	            "StartedAt": "2022-06-02T18:01:29.039934789Z",
	            "FinishedAt": "2022-06-02T18:06:30.797930837Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hostname",
	        "HostsPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hosts",
	        "LogPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503-json.log",
	        "Name": "/default-k8s-different-port-20220602180121-283122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220602180121-283122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220602180121-283122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48-init/diff:/var/lib/docker/overlay2/9517781e97ae29bbe1cf38381a601e806342524da1c5d5dc15ba54ec17f204b6/diff:/var/lib/docker/overlay2/28e14d8724bd82b9b6adde55b66e6d1f1be4fa6c3379f2f44ca1138dd648175a/diff:/var/lib/docker/overlay2/24589dfb9ddc941e7cb22b5e38dec69d1af0a29a330d344152ca7f26010354a8/diff:/var/lib/docker/overlay2/66c45d6273a3507b6e0b6b0753ec4fc82f07160fd115626e24a42bda27bbba86/diff:/var/lib/docker/overlay2/a6f0c661bd178a2a0fef034493a114067e8952b8516e69aff1e1e94a75918bbc/diff:/var/lib/docker/overlay2/5b40a045506372be84bfb01d05b068bc4cc926c5eb9e868226c4c386e8a331c7/diff:/var/lib/docker/overlay2/dbf9a9c6e59628984c95ea93ff4ada66c53b22ca3c2f910f70efba895cf40718/diff:/var/lib/docker/overlay2/5eea619393ee989a35a37a13b0633685af91729e1b74f6e73fd94b63c9d7d1fa/diff:/var/lib/docker/overlay2/24de6fbece5386c3658fa3e2c01723e6c1c24fb837d53b3d5a8c75b64ce2cd06/diff:/var/lib/docker/overlay2/04593e
f7d3bf72ee8b7439b0326af5b9818930e0dd48c35dba97e3b4ae39f4ee/diff:/var/lib/docker/overlay2/09310697cecb557085bb4d5dfc810a82fc533759ad98179263b2fc94153a64fa/diff:/var/lib/docker/overlay2/ee75c420f0615a19a5d44cfe042bdca5a2cf0f1c541168ba424140cf2aa7f98d/diff:/var/lib/docker/overlay2/fc8643a3e2060ab0365cd1227ab9ebbd5a573e6ebf8379bc6b64a756a1f66562/diff:/var/lib/docker/overlay2/0f8571cab87b35114fab495cbde9af8ac7e58be99412da82a79c90239e2227e1/diff:/var/lib/docker/overlay2/e1a5ae079e4f566081e5e786955257c23809c8719ea6b4593c8c91ac00380379/diff:/var/lib/docker/overlay2/373565e63dec12c85906e80b4a030f7d852992a756749be9424ac17802d589c0/diff:/var/lib/docker/overlay2/b886b42aa5e9f06b4381a6f58d55338774a2d02af2e91e3de156aa4bca4e4be0/diff:/var/lib/docker/overlay2/dee534a744ce54d08ca752b7f64e1772c063d4ba1e008c5a9773188bb009c377/diff:/var/lib/docker/overlay2/c8cf6667f29ffc5417675e4bd8eee6a91468de13292e3cb6ae2c70d3ad56d5fd/diff:/var/lib/docker/overlay2/072b57a4228c3928bdb27162809b102b485bb94c5f726c9a3e9892267ed30839/diff:/var/lib/d
ocker/overlay2/491d5b51bf6de1cfc4eef288c454f54e3b424e3b83fd9dc216c35891c7923f55/diff:/var/lib/docker/overlay2/78f167c2cc286e4058ef09d5a719437afddde854aac41b8d054567865fa61024/diff:/var/lib/docker/overlay2/10763b3ba26983ed9426ccca08bfceb7037d95c69827847e2ff755b4f2c5a4ab/diff:/var/lib/docker/overlay2/e559098efb21164d96bd0aee50fee9f48310df68887c5c884f4ba3f3fc895260/diff:/var/lib/docker/overlay2/555ef1505fe2b1ab4244f073e9212f269d70d8cd6ca0cd517d21cc80cd25b4c1/diff:/var/lib/docker/overlay2/0201da7c907a1f8f564cda48c3f3a403e484d901739d2584f1343a7907685c5e/diff:/var/lib/docker/overlay2/0abae5d84f8497a5114349421415431c43a34015ad98f6c9c3140143d2f1f383/diff:/var/lib/docker/overlay2/80c37982eef2dc00bd08f208551df377570134af84008156f6a500855fd2ba10/diff:/var/lib/docker/overlay2/cc38481bb11be97cd54c0032709337a491580c2f0ae6d3f7eec4c3616f8e3126/diff:/var/lib/docker/overlay2/dda978aeb4c7317bafbc077229c4417a4d8e7658d9604803c401448b471eab5e/diff:/var/lib/docker/overlay2/7c230ae0970689e2932cdaadc9e3f86ec8215fbf384c9eb59cfb958daf7
3197e/diff:/var/lib/docker/overlay2/145db51b169211e46c3bbab5ca998b2a019a4a813801acc2dae9b2dc09bc2ecc/diff:/var/lib/docker/overlay2/e604163672a013549c59bc042d1b932a3dad9e104d7079fbbb3ca24c29351c0d/diff:/var/lib/docker/overlay2/5a304d8680fd3fb594754173a70646923e2715ed21c98b8c43e1f1ef2d7abd6a/diff:/var/lib/docker/overlay2/b77f00351ddd70be2f3d9dd622b78e1a2937c4ba71585b357fef88d285eb762e/diff:/var/lib/docker/overlay2/196b6ae042b9b08748253c16f5e2b56d7f9a25077008d3d447cbc75447ed5e2a/diff:/var/lib/docker/overlay2/8c7c3fbb0bed8f7440d11104257806c70d16fd7d432311fc208d5012a439e5a1/diff:/var/lib/docker/overlay2/10604aa3b8ed5698c9fa8f55e00b9a9dc27d6b99135255d6c756fc080e615fc6/diff:/var/lib/docker/overlay2/c6602a17e9b0ab1d84297109204b48e60f293952fbc1feaf362895ec86860ead/diff:/var/lib/docker/overlay2/3828c5e50db9be2b4bc30fce040392ea735da5eaa819ad0848cb4fb79fc367da/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220602180121-283122",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220602180121-283122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220602180121-283122",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3989e814c0ac91c6e28c041e22d15c3ab28ed2a1bbec4bcc9b0a154e6c83dc06",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/3989e814c0ac",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220602180121-283122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c65299b1d424",
	                        "default-k8s-different-port-20220602180121-283122"
	                    ],
	                    "NetworkID": "c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602180121-283122 -n default-k8s-different-port-20220602180121-283122
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602180121-283122 -n default-k8s-different-port-20220602180121-283122: exit status 7 (110.488227ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220602180121-283122" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20220602180121-283122 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-different-port-20220602180121-283122 --alsologtostderr -v=1: exit status 89 (121.17057ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-different-port-20220602180121-283122"

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 18:06:45.647000  572796 out.go:296] Setting OutFile to fd 1 ...
	I0602 18:06:45.647286  572796 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 18:06:45.647301  572796 out.go:309] Setting ErrFile to fd 2...
	I0602 18:06:45.647308  572796 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 18:06:45.647492  572796 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 18:06:45.647741  572796 out.go:303] Setting JSON to false
	I0602 18:06:45.647778  572796 mustload.go:65] Loading cluster: default-k8s-different-port-20220602180121-283122
	I0602 18:06:45.648267  572796 config.go:178] Loaded profile config "default-k8s-different-port-20220602180121-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 18:06:45.648895  572796 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220602180121-283122 --format={{.State.Status}}
	I0602 18:06:45.691641  572796 out.go:177] * The control plane node must be running for this command
	I0602 18:06:45.693878  572796 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-different-port-20220602180121-283122"

                                                
                                                
** /stderr **
start_stop_delete_test.go:313: out/minikube-linux-amd64 pause -p default-k8s-different-port-20220602180121-283122 --alsologtostderr -v=1 failed: exit status 89
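
The stderr trace above shows the whole exit-89 path: pause loads the profile config, asks Docker for the container state via `docker container inspect --format={{.State.Status}}`, and bails out with the start hint because the container has exited. A hedged Go sketch of that guard follows; the structure and names are illustrative rather than minikube's actual source, and only the commands, messages, and exit status are taken from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// containerStatus mirrors the cli_runner call in the trace: ask the
// Docker CLI for the profile container's state ("running", "exited", ...).
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	profile := "default-k8s-different-port-20220602180121-283122"
	status, err := containerStatus(profile)
	if err != nil || status != "running" {
		// The two lines the test saw on stdout, followed by the exit
		// status it recorded.
		fmt.Println("* The control plane node must be running for this command")
		fmt.Printf("  To start a cluster, run: %q\n", "minikube start -p "+profile)
		os.Exit(89)
	}
	// ... proceed to pause the cluster's kube-system workloads ...
}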
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220602180121-283122
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220602180121-283122:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503",
	        "Created": "2022-06-02T18:01:28.65338102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431 not found",
	            "StartedAt": "2022-06-02T18:01:29.039934789Z",
	            "FinishedAt": "2022-06-02T18:06:30.797930837Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hostname",
	        "HostsPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hosts",
	        "LogPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503-json.log",
	        "Name": "/default-k8s-different-port-20220602180121-283122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220602180121-283122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220602180121-283122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48-init/diff:/var/lib/docker/overlay2/9517781e97ae29bbe1cf38381a601e806342524da1c5d5dc15ba54ec17f204b6/diff:/var/lib/docker/overlay2/28e14d8724bd82b9b6adde55b66e6d1f1be4fa6c3379f2f44ca1138dd648175a/diff:/var/lib/docker/overlay2/24589dfb9ddc941e7cb22b5e38dec69d1af0a29a330d344152ca7f26010354a8/diff:/var/lib/docker/overlay2/66c45d6273a3507b6e0b6b0753ec4fc82f07160fd115626e24a42bda27bbba86/diff:/var/lib/docker/overlay2/a6f0c661bd178a2a0fef034493a114067e8952b8516e69aff1e1e94a75918bbc/diff:/var/lib/docker/overlay2/5b40a045506372be84bfb01d05b068bc4cc926c5eb9e868226c4c386e8a331c7/diff:/var/lib/docker/overlay2/dbf9a9c6e59628984c95ea93ff4ada66c53b22ca3c2f910f70efba895cf40718/diff:/var/lib/docker/overlay2/5eea619393ee989a35a37a13b0633685af91729e1b74f6e73fd94b63c9d7d1fa/diff:/var/lib/docker/overlay2/24de6fbece5386c3658fa3e2c01723e6c1c24fb837d53b3d5a8c75b64ce2cd06/diff:/var/lib/docker/overlay2/04593e
f7d3bf72ee8b7439b0326af5b9818930e0dd48c35dba97e3b4ae39f4ee/diff:/var/lib/docker/overlay2/09310697cecb557085bb4d5dfc810a82fc533759ad98179263b2fc94153a64fa/diff:/var/lib/docker/overlay2/ee75c420f0615a19a5d44cfe042bdca5a2cf0f1c541168ba424140cf2aa7f98d/diff:/var/lib/docker/overlay2/fc8643a3e2060ab0365cd1227ab9ebbd5a573e6ebf8379bc6b64a756a1f66562/diff:/var/lib/docker/overlay2/0f8571cab87b35114fab495cbde9af8ac7e58be99412da82a79c90239e2227e1/diff:/var/lib/docker/overlay2/e1a5ae079e4f566081e5e786955257c23809c8719ea6b4593c8c91ac00380379/diff:/var/lib/docker/overlay2/373565e63dec12c85906e80b4a030f7d852992a756749be9424ac17802d589c0/diff:/var/lib/docker/overlay2/b886b42aa5e9f06b4381a6f58d55338774a2d02af2e91e3de156aa4bca4e4be0/diff:/var/lib/docker/overlay2/dee534a744ce54d08ca752b7f64e1772c063d4ba1e008c5a9773188bb009c377/diff:/var/lib/docker/overlay2/c8cf6667f29ffc5417675e4bd8eee6a91468de13292e3cb6ae2c70d3ad56d5fd/diff:/var/lib/docker/overlay2/072b57a4228c3928bdb27162809b102b485bb94c5f726c9a3e9892267ed30839/diff:/var/lib/d
ocker/overlay2/491d5b51bf6de1cfc4eef288c454f54e3b424e3b83fd9dc216c35891c7923f55/diff:/var/lib/docker/overlay2/78f167c2cc286e4058ef09d5a719437afddde854aac41b8d054567865fa61024/diff:/var/lib/docker/overlay2/10763b3ba26983ed9426ccca08bfceb7037d95c69827847e2ff755b4f2c5a4ab/diff:/var/lib/docker/overlay2/e559098efb21164d96bd0aee50fee9f48310df68887c5c884f4ba3f3fc895260/diff:/var/lib/docker/overlay2/555ef1505fe2b1ab4244f073e9212f269d70d8cd6ca0cd517d21cc80cd25b4c1/diff:/var/lib/docker/overlay2/0201da7c907a1f8f564cda48c3f3a403e484d901739d2584f1343a7907685c5e/diff:/var/lib/docker/overlay2/0abae5d84f8497a5114349421415431c43a34015ad98f6c9c3140143d2f1f383/diff:/var/lib/docker/overlay2/80c37982eef2dc00bd08f208551df377570134af84008156f6a500855fd2ba10/diff:/var/lib/docker/overlay2/cc38481bb11be97cd54c0032709337a491580c2f0ae6d3f7eec4c3616f8e3126/diff:/var/lib/docker/overlay2/dda978aeb4c7317bafbc077229c4417a4d8e7658d9604803c401448b471eab5e/diff:/var/lib/docker/overlay2/7c230ae0970689e2932cdaadc9e3f86ec8215fbf384c9eb59cfb958daf7
3197e/diff:/var/lib/docker/overlay2/145db51b169211e46c3bbab5ca998b2a019a4a813801acc2dae9b2dc09bc2ecc/diff:/var/lib/docker/overlay2/e604163672a013549c59bc042d1b932a3dad9e104d7079fbbb3ca24c29351c0d/diff:/var/lib/docker/overlay2/5a304d8680fd3fb594754173a70646923e2715ed21c98b8c43e1f1ef2d7abd6a/diff:/var/lib/docker/overlay2/b77f00351ddd70be2f3d9dd622b78e1a2937c4ba71585b357fef88d285eb762e/diff:/var/lib/docker/overlay2/196b6ae042b9b08748253c16f5e2b56d7f9a25077008d3d447cbc75447ed5e2a/diff:/var/lib/docker/overlay2/8c7c3fbb0bed8f7440d11104257806c70d16fd7d432311fc208d5012a439e5a1/diff:/var/lib/docker/overlay2/10604aa3b8ed5698c9fa8f55e00b9a9dc27d6b99135255d6c756fc080e615fc6/diff:/var/lib/docker/overlay2/c6602a17e9b0ab1d84297109204b48e60f293952fbc1feaf362895ec86860ead/diff:/var/lib/docker/overlay2/3828c5e50db9be2b4bc30fce040392ea735da5eaa819ad0848cb4fb79fc367da/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220602180121-283122",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220602180121-283122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220602180121-283122",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3989e814c0ac91c6e28c041e22d15c3ab28ed2a1bbec4bcc9b0a154e6c83dc06",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/3989e814c0ac",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220602180121-283122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c65299b1d424",
	                        "default-k8s-different-port-20220602180121-283122"
	                    ],
	                    "NetworkID": "c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602180121-283122 -n default-k8s-different-port-20220602180121-283122
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602180121-283122 -n default-k8s-different-port-20220602180121-283122: exit status 7 (121.293302ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220602180121-283122" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220602180121-283122
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220602180121-283122:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503",
	        "Created": "2022-06-02T18:01:28.65338102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "network c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431 not found",
	            "StartedAt": "2022-06-02T18:01:29.039934789Z",
	            "FinishedAt": "2022-06-02T18:06:30.797930837Z"
	        },
	        "Image": "sha256:462790409a0e2520bcd6c4a009da80732d87146c973d78561764290fa691ea41",
	        "ResolvConfPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hostname",
	        "HostsPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/hosts",
	        "LogPath": "/var/lib/docker/containers/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503/c65299b1d424e4e7490808851eaba21434e3d32dea8d5f830e5a875aaf1ad503-json.log",
	        "Name": "/default-k8s-different-port-20220602180121-283122",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220602180121-283122:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220602180121-283122",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48-init/diff:/var/lib/docker/overlay2/9517781e97ae29bbe1cf38381a601e806342524da1c5d5dc15ba54ec17f204b6/diff:/var/lib/docker/overlay2/28e14d8724bd82b9b6adde55b66e6d1f1be4fa6c3379f2f44ca1138dd648175a/diff:/var/lib/docker/overlay2/24589dfb9ddc941e7cb22b5e38dec69d1af0a29a330d344152ca7f26010354a8/diff:/var/lib/docker/overlay2/66c45d6273a3507b6e0b6b0753ec4fc82f07160fd115626e24a42bda27bbba86/diff:/var/lib/docker/overlay2/a6f0c661bd178a2a0fef034493a114067e8952b8516e69aff1e1e94a75918bbc/diff:/var/lib/docker/overlay2/5b40a045506372be84bfb01d05b068bc4cc926c5eb9e868226c4c386e8a331c7/diff:/var/lib/docker/overlay2/dbf9a9c6e59628984c95ea93ff4ada66c53b22ca3c2f910f70efba895cf40718/diff:/var/lib/docker/overlay2/5eea619393ee989a35a37a13b0633685af91729e1b74f6e73fd94b63c9d7d1fa/diff:/var/lib/docker/overlay2/24de6fbece5386c3658fa3e2c01723e6c1c24fb837d53b3d5a8c75b64ce2cd06/diff:/var/lib/docker/overlay2/04593e
f7d3bf72ee8b7439b0326af5b9818930e0dd48c35dba97e3b4ae39f4ee/diff:/var/lib/docker/overlay2/09310697cecb557085bb4d5dfc810a82fc533759ad98179263b2fc94153a64fa/diff:/var/lib/docker/overlay2/ee75c420f0615a19a5d44cfe042bdca5a2cf0f1c541168ba424140cf2aa7f98d/diff:/var/lib/docker/overlay2/fc8643a3e2060ab0365cd1227ab9ebbd5a573e6ebf8379bc6b64a756a1f66562/diff:/var/lib/docker/overlay2/0f8571cab87b35114fab495cbde9af8ac7e58be99412da82a79c90239e2227e1/diff:/var/lib/docker/overlay2/e1a5ae079e4f566081e5e786955257c23809c8719ea6b4593c8c91ac00380379/diff:/var/lib/docker/overlay2/373565e63dec12c85906e80b4a030f7d852992a756749be9424ac17802d589c0/diff:/var/lib/docker/overlay2/b886b42aa5e9f06b4381a6f58d55338774a2d02af2e91e3de156aa4bca4e4be0/diff:/var/lib/docker/overlay2/dee534a744ce54d08ca752b7f64e1772c063d4ba1e008c5a9773188bb009c377/diff:/var/lib/docker/overlay2/c8cf6667f29ffc5417675e4bd8eee6a91468de13292e3cb6ae2c70d3ad56d5fd/diff:/var/lib/docker/overlay2/072b57a4228c3928bdb27162809b102b485bb94c5f726c9a3e9892267ed30839/diff:/var/lib/d
ocker/overlay2/491d5b51bf6de1cfc4eef288c454f54e3b424e3b83fd9dc216c35891c7923f55/diff:/var/lib/docker/overlay2/78f167c2cc286e4058ef09d5a719437afddde854aac41b8d054567865fa61024/diff:/var/lib/docker/overlay2/10763b3ba26983ed9426ccca08bfceb7037d95c69827847e2ff755b4f2c5a4ab/diff:/var/lib/docker/overlay2/e559098efb21164d96bd0aee50fee9f48310df68887c5c884f4ba3f3fc895260/diff:/var/lib/docker/overlay2/555ef1505fe2b1ab4244f073e9212f269d70d8cd6ca0cd517d21cc80cd25b4c1/diff:/var/lib/docker/overlay2/0201da7c907a1f8f564cda48c3f3a403e484d901739d2584f1343a7907685c5e/diff:/var/lib/docker/overlay2/0abae5d84f8497a5114349421415431c43a34015ad98f6c9c3140143d2f1f383/diff:/var/lib/docker/overlay2/80c37982eef2dc00bd08f208551df377570134af84008156f6a500855fd2ba10/diff:/var/lib/docker/overlay2/cc38481bb11be97cd54c0032709337a491580c2f0ae6d3f7eec4c3616f8e3126/diff:/var/lib/docker/overlay2/dda978aeb4c7317bafbc077229c4417a4d8e7658d9604803c401448b471eab5e/diff:/var/lib/docker/overlay2/7c230ae0970689e2932cdaadc9e3f86ec8215fbf384c9eb59cfb958daf7
3197e/diff:/var/lib/docker/overlay2/145db51b169211e46c3bbab5ca998b2a019a4a813801acc2dae9b2dc09bc2ecc/diff:/var/lib/docker/overlay2/e604163672a013549c59bc042d1b932a3dad9e104d7079fbbb3ca24c29351c0d/diff:/var/lib/docker/overlay2/5a304d8680fd3fb594754173a70646923e2715ed21c98b8c43e1f1ef2d7abd6a/diff:/var/lib/docker/overlay2/b77f00351ddd70be2f3d9dd622b78e1a2937c4ba71585b357fef88d285eb762e/diff:/var/lib/docker/overlay2/196b6ae042b9b08748253c16f5e2b56d7f9a25077008d3d447cbc75447ed5e2a/diff:/var/lib/docker/overlay2/8c7c3fbb0bed8f7440d11104257806c70d16fd7d432311fc208d5012a439e5a1/diff:/var/lib/docker/overlay2/10604aa3b8ed5698c9fa8f55e00b9a9dc27d6b99135255d6c756fc080e615fc6/diff:/var/lib/docker/overlay2/c6602a17e9b0ab1d84297109204b48e60f293952fbc1feaf362895ec86860ead/diff:/var/lib/docker/overlay2/3828c5e50db9be2b4bc30fce040392ea735da5eaa819ad0848cb4fb79fc367da/diff:/var/lib/docker/overlay2/708816f20892c0e4b94cbe8e80b169fff54ffd870bcde395bd8163dd03b25d0f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4f5db155f4cdcebb3fa23d55c9ebd42bf15a85efabf14b7eb067c04a8e6b0b48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220602180121-283122",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220602180121-283122/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220602180121-283122",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220602180121-283122",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3989e814c0ac91c6e28c041e22d15c3ab28ed2a1bbec4bcc9b0a154e6c83dc06",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/3989e814c0ac",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220602180121-283122": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c65299b1d424",
	                        "default-k8s-different-port-20220602180121-283122"
	                    ],
	                    "NetworkID": "c9942ce21648740d1266dd81f85155715256e5c54e7d23ee69de43962d50f431",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602180121-283122 -n default-k8s-different-port-20220602180121-283122
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602180121-283122 -n default-k8s-different-port-20220602180121-283122: exit status 7 (106.604444ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-different-port-20220602180121-283122" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (0.43s)
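On the `exit status 7 (may be ok)` above: `minikube status` encodes component state in its exit code as a bitmask (per its own help text, roughly 1 = host not running, 2 = kubelet/cluster not running, 4 = apiserver/Kubernetes not running), so 7 means all three are down, consistent with a deliberately stopped profile. A minimal Go sketch decoding the bits (binary path and profile name copied from this run):

	// status_bits.go: run `minikube status` and decode its bitmask exit code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"-p", "default-k8s-different-port-20220602180121-283122")
		if err := cmd.Run(); err != nil && cmd.ProcessState == nil {
			fmt.Println("could not start minikube:", err)
			return
		}
		code := cmd.ProcessState.ExitCode()
		fmt.Printf("exit %d: host down=%v kubelet down=%v apiserver down=%v\n",
			code, code&1 != 0, code&2 != 0, code&4 != 0)
	}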

                                                
                                    
TestNetworkPlugins/group/calico/DNS (314.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:11:07.942246  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 18:11:11.421278  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory
E0602 18:11:11.426619  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory
E0602 18:11:11.437004  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory
E0602 18:11:11.457372  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory
E0602 18:11:11.497781  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory
E0602 18:11:11.578215  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory
E0602 18:11:11.739359  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory
E0602 18:11:12.059973  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159708501s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0602 18:11:12.700463  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:11:13.981531  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory
E0602 18:11:16.541694  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory
E0602 18:11:21.662250  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140805387s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0602 18:11:28.581173  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:11:31.903446  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141363196s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14302714s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14897184s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:12:33.345217  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.153295462s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151728927s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150914056s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:13:55.265892  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory
E0602 18:13:55.531376  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144348761s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150778447s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155963795s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context calico-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.160277264s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/calico/DNS (314.93s)
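Every probe above is the same `kubectl exec deployment/netcat -- nslookup kubernetes.default` timing out after ~15s; the interleaved cert_rotation errors reference client certs of profiles already torn down, so they appear to be cross-test noise rather than the cause. A minimal Go sketch of the kind of probe net_test.go:169 drives (context name copied from this run; the 10.96.0.1 check mirrors the want=*"10.96.0.1"* assertion; the retry count and sleep are placeholders, not the harness's actual schedule):

	// dns_probe.go: retry an in-cluster DNS lookup until it resolves.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for attempt := 1; attempt <= 5; attempt++ {
			out, err := exec.Command("kubectl", "--context",
				"calico-20220602175747-283122", "exec", "deployment/netcat",
				"--", "nslookup", "kubernetes.default").CombinedOutput()
			if err == nil && strings.Contains(string(out), "10.96.0.1") {
				fmt.Printf("resolved:\n%s", out)
				return
			}
			fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
			time.Sleep(15 * time.Second) // placeholder backoff
		}
	}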

                                                
                                    
TestNetworkPlugins/group/auto/DNS (354.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:11:52.384536  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.157475757s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.167273736s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135363176s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:12:50.501618  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13996185s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.149567791s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.164794697s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140409373s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:14:20.229661  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory
E0602 18:14:20.234998  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory
E0602 18:14:20.245304  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory
E0602 18:14:20.265681  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory
E0602 18:14:20.306015  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory
E0602 18:14:20.386351  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory
E0602 18:14:20.546848  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory
E0602 18:14:20.867461  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory
E0602 18:14:21.508505  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory
E0602 18:14:22.788979  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141460525s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0602 18:14:29.022672  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146316574s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148809014s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150669365s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context auto-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.149630914s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/auto/DNS (354.94s)
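The auto profile fails the same way as calico above: the netcat pod reaches no DNS server at all. A first triage pass for this kind of total DNS timeout is usually to check the CoreDNS pods, the kube-dns Service, and recent CoreDNS logs; a hedged sketch (generic Kubernetes diagnostics, not part of this suite; context name copied from this run):

	// dns_triage.go: check CoreDNS pods, the kube-dns Service, and recent logs.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func run(ctx string, args ...string) {
		full := append([]string{"--context", ctx}, args...)
		// CombinedOutput captures failures too, so errors surface in the output.
		out, _ := exec.Command("kubectl", full...).CombinedOutput()
		fmt.Printf("$ kubectl %s\n%s\n", strings.Join(full, " "), out)
	}

	func main() {
		ctx := "auto-20220602175746-283122"
		run(ctx, "-n", "kube-system", "get", "pods", "-l", "k8s-app=kube-dns", "-o", "wide")
		run(ctx, "-n", "kube-system", "get", "svc", "kube-dns")
		run(ctx, "-n", "kube-system", "logs", "-l", "k8s-app=kube-dns", "--tail=20")
	}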

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (277.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220602175747-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kindnet-20220602175747-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker: exit status 80 (4m37.091204415s)

                                                
                                                
-- stdout --
	* [kindnet-20220602175747-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	* Using Docker driver with the root privilege
	* Starting control plane node kindnet-20220602175747-283122 in cluster kindnet-20220602175747-283122
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 18:15:18.554603  634278 out.go:296] Setting OutFile to fd 1 ...
	I0602 18:15:18.554840  634278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 18:15:18.554861  634278 out.go:309] Setting ErrFile to fd 2...
	I0602 18:15:18.554868  634278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 18:15:18.555025  634278 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 18:15:18.555425  634278 out.go:303] Setting JSON to false
	I0602 18:15:18.557693  634278 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10672,"bootTime":1654183047,"procs":692,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0602 18:15:18.557785  634278 start.go:125] virtualization: kvm guest
	I0602 18:15:18.560776  634278 out.go:177] * [kindnet-20220602175747-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0602 18:15:18.562369  634278 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 18:15:18.562384  634278 notify.go:193] Checking for updates...
	I0602 18:15:18.563884  634278 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 18:15:18.565545  634278 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 18:15:18.567145  634278 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 18:15:18.568631  634278 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0602 18:15:18.570498  634278 config.go:178] Loaded profile config "auto-20220602175746-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 18:15:18.570594  634278 config.go:178] Loaded profile config "calico-20220602175747-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 18:15:18.570666  634278 config.go:178] Loaded profile config "false-20220602175747-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 18:15:18.570747  634278 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 18:15:18.614988  634278 docker.go:137] docker version: linux-20.10.16
	I0602 18:15:18.615117  634278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 18:15:18.731498  634278 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-02 18:15:18.648001459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 18:15:18.731605  634278 docker.go:254] overlay module found
	I0602 18:15:18.734247  634278 out.go:177] * Using the docker driver based on user configuration
	I0602 18:15:18.735935  634278 start.go:284] selected driver: docker
	I0602 18:15:18.735963  634278 start.go:806] validating driver "docker" against <nil>
	I0602 18:15:18.735989  634278 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 18:15:18.737136  634278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 18:15:18.885298  634278 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-06-02 18:15:18.769531696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 18:15:18.885463  634278 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 18:15:18.885698  634278 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0602 18:15:18.890187  634278 out.go:177] * Using Docker driver with the root privilege
	I0602 18:15:18.891596  634278 cni.go:95] Creating CNI manager for "kindnet"
	I0602 18:15:18.891672  634278 cni.go:225] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0602 18:15:18.891685  634278 cni.go:230] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0602 18:15:18.891693  634278 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0602 18:15:18.891711  634278 start_flags.go:306] config:
	{Name:kindnet-20220602175747-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220602175747-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 18:15:18.894629  634278 out.go:177] * Starting control plane node kindnet-20220602175747-283122 in cluster kindnet-20220602175747-283122
	I0602 18:15:18.896267  634278 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 18:15:18.897634  634278 out.go:177] * Pulling base image ...
	I0602 18:15:18.899006  634278 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 18:15:18.899062  634278 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4
	I0602 18:15:18.899077  634278 cache.go:57] Caching tarball of preloaded images
	I0602 18:15:18.899362  634278 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0602 18:15:18.899398  634278 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on docker
	I0602 18:15:18.899554  634278 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/config.json ...
	I0602 18:15:18.899590  634278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/config.json: {Name:mk096e0956fd1f94762b4dc6da5fbe30b021ba88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 18:15:18.905481  634278 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 18:15:18.971734  634278 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 18:15:18.971765  634278 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 exists in daemon, skipping load
	I0602 18:15:18.971779  634278 cache.go:206] Successfully downloaded all kic artifacts
	I0602 18:15:18.971850  634278 start.go:352] acquiring machines lock for kindnet-20220602175747-283122: {Name:mk726a243afe83370fe2ef8fbf7d1985e9db5681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0602 18:15:18.972040  634278 start.go:356] acquired machines lock for "kindnet-20220602175747-283122" in 164.926µs
	I0602 18:15:18.972073  634278 start.go:91] Provisioning new machine with config: &{Name:kindnet-20220602175747-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220602175747-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 18:15:18.972180  634278 start.go:131] createHost starting for "" (driver="docker")
	I0602 18:15:18.974572  634278 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0602 18:15:18.974874  634278 start.go:165] libmachine.API.Create for "kindnet-20220602175747-283122" (driver="docker")
	I0602 18:15:18.974922  634278 client.go:168] LocalClient.Create starting
	I0602 18:15:18.975023  634278 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem
	I0602 18:15:18.975067  634278 main.go:134] libmachine: Decoding PEM data...
	I0602 18:15:18.975088  634278 main.go:134] libmachine: Parsing certificate...
	I0602 18:15:18.975213  634278 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem
	I0602 18:15:18.975248  634278 main.go:134] libmachine: Decoding PEM data...
	I0602 18:15:18.975270  634278 main.go:134] libmachine: Parsing certificate...
	I0602 18:15:18.975753  634278 cli_runner.go:164] Run: docker network inspect kindnet-20220602175747-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0602 18:15:19.011230  634278 cli_runner.go:211] docker network inspect kindnet-20220602175747-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0602 18:15:19.011313  634278 network_create.go:272] running [docker network inspect kindnet-20220602175747-283122] to gather additional debugging logs...
	I0602 18:15:19.011343  634278 cli_runner.go:164] Run: docker network inspect kindnet-20220602175747-283122
	W0602 18:15:19.052395  634278 cli_runner.go:211] docker network inspect kindnet-20220602175747-283122 returned with exit code 1
	I0602 18:15:19.052435  634278 network_create.go:275] error running [docker network inspect kindnet-20220602175747-283122]: docker network inspect kindnet-20220602175747-283122: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220602175747-283122
	I0602 18:15:19.052454  634278 network_create.go:277] output of [docker network inspect kindnet-20220602175747-283122]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220602175747-283122
	
	** /stderr **
	I0602 18:15:19.052544  634278 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 18:15:19.091929  634278 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-bc199d17ff4f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:b9:1f:4a:ef}}
	I0602 18:15:19.092769  634278 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000a92220] misses:0}
	I0602 18:15:19.092813  634278 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0602 18:15:19.092829  634278 network_create.go:115] attempt to create docker network kindnet-20220602175747-283122 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0602 18:15:19.092884  634278 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220602175747-283122
	I0602 18:15:19.184193  634278 network_create.go:99] docker network kindnet-20220602175747-283122 192.168.58.0/24 created
	I0602 18:15:19.184238  634278 kic.go:106] calculated static IP "192.168.58.2" for the "kindnet-20220602175747-283122" container
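	Subnet selection here walks private /24 candidates, skipping any that docker already uses: 192.168.49.0/24 is taken by the existing bridge, so 192.168.58.0/24 is reserved and created, and the node's static IP is the first client address after the gateway. A sketch of the arithmetic; the third-octet step of 9 is an assumption generalized from just these two subnets:

	package main

	import "fmt"

	func main() {
		// 192.168.49.0/24 was taken, 192.168.58.0/24 was chosen.
		taken, step := 49, 9
		next := taken + step
		fmt.Printf("subnet:  192.168.%d.0/24\n", next) // what network_create received
		fmt.Printf("gateway: 192.168.%d.1\n", next)    // first address in the range
		fmt.Printf("node IP: 192.168.%d.2\n", next)    // first client address (kic static IP)
	}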
	I0602 18:15:19.184313  634278 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0602 18:15:19.220324  634278 cli_runner.go:164] Run: docker volume create kindnet-20220602175747-283122 --label name.minikube.sigs.k8s.io=kindnet-20220602175747-283122 --label created_by.minikube.sigs.k8s.io=true
	I0602 18:15:19.270870  634278 oci.go:103] Successfully created a docker volume kindnet-20220602175747-283122
	I0602 18:15:19.270978  634278 cli_runner.go:164] Run: docker run --rm --name kindnet-20220602175747-283122-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220602175747-283122 --entrypoint /usr/bin/test -v kindnet-20220602175747-283122:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -d /var/lib
	I0602 18:15:19.926421  634278 oci.go:107] Successfully prepared a docker volume kindnet-20220602175747-283122
	I0602 18:15:19.926487  634278 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 18:15:19.926517  634278 kic.go:179] Starting extracting preloaded images to volume ...
	I0602 18:15:19.926619  634278 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220602175747-283122:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir
	I0602 18:15:24.852405  634278 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220602175747-283122:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 -I lz4 -xf /preloaded.tar -C /extractDir: (4.925680971s)
	I0602 18:15:24.852440  634278 kic.go:188] duration metric: took 4.925920 seconds to extract preloaded images to volume
	W0602 18:15:24.852621  634278 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0602 18:15:24.852716  634278 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0602 18:15:24.992864  634278 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220602175747-283122 --name kindnet-20220602175747-283122 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220602175747-283122 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220602175747-283122 --network kindnet-20220602175747-283122 --ip 192.168.58.2 --volume kindnet-20220602175747-283122:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496
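	The docker run above publishes five container ports (22, 2376, 5000, 8443, 32443) on loopback-bound ephemeral host ports; the concrete host port for SSH is then recovered with the container-inspect template that recurs throughout the rest of the log. A hypothetical standalone version of that lookup:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"kindnet-20220602175747-283122").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 49734 in this run
	}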
	I0602 18:15:25.471644  634278 cli_runner.go:164] Run: docker container inspect kindnet-20220602175747-283122 --format={{.State.Running}}
	I0602 18:15:25.511117  634278 cli_runner.go:164] Run: docker container inspect kindnet-20220602175747-283122 --format={{.State.Status}}
	I0602 18:15:25.546359  634278 cli_runner.go:164] Run: docker exec kindnet-20220602175747-283122 stat /var/lib/dpkg/alternatives/iptables
	I0602 18:15:25.640523  634278 oci.go:247] the created container "kindnet-20220602175747-283122" has a running status.
	I0602 18:15:25.640564  634278 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kindnet-20220602175747-283122/id_rsa...
	I0602 18:15:25.713547  634278 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kindnet-20220602175747-283122/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0602 18:15:25.823835  634278 cli_runner.go:164] Run: docker container inspect kindnet-20220602175747-283122 --format={{.State.Status}}
	I0602 18:15:25.875616  634278 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0602 18:15:25.875647  634278 kic_runner.go:114] Args: [docker exec --privileged kindnet-20220602175747-283122 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0602 18:15:25.965247  634278 cli_runner.go:164] Run: docker container inspect kindnet-20220602175747-283122 --format={{.State.Status}}
	I0602 18:15:26.010423  634278 machine.go:88] provisioning docker machine ...
	I0602 18:15:26.010471  634278 ubuntu.go:169] provisioning hostname "kindnet-20220602175747-283122"
	I0602 18:15:26.010532  634278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220602175747-283122
	I0602 18:15:26.051915  634278 main.go:134] libmachine: Using SSH client type: native
	I0602 18:15:26.052095  634278 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49734 <nil> <nil>}
	I0602 18:15:26.052115  634278 main.go:134] libmachine: About to run SSH command:
	sudo hostname kindnet-20220602175747-283122 && echo "kindnet-20220602175747-283122" | sudo tee /etc/hostname
	I0602 18:15:26.187930  634278 main.go:134] libmachine: SSH cmd err, output: <nil>: kindnet-20220602175747-283122
	
	I0602 18:15:26.188022  634278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220602175747-283122
	I0602 18:15:26.233159  634278 main.go:134] libmachine: Using SSH client type: native
	I0602 18:15:26.233311  634278 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49734 <nil> <nil>}
	I0602 18:15:26.233330  634278 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-20220602175747-283122' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20220602175747-283122/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-20220602175747-283122' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0602 18:15:26.361329  634278 main.go:134] libmachine: SSH cmd err, output: <nil>: 
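	The two SSH commands above first set the hostname and then make /etc/hosts agree with it: an existing 127.0.1.1 entry is rewritten, otherwise one is appended, and the outer grep -xq guard makes the whole edit idempotent. A sketch that templates the same script for an arbitrary hostname (the helper name is mine; the script body is verbatim from the log):

	package main

	import "fmt"

	// hostsFixupCmd reproduces the logged script for an arbitrary hostname.
	func hostsFixupCmd(name string) string {
		return fmt.Sprintf(`
			if ! grep -xq '.*\s%[1]s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
				else
					echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
				fi
			fi`, name)
	}

	func main() {
		fmt.Println(hostsFixupCmd("kindnet-20220602175747-283122"))
	}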
	I0602 18:15:26.361375  634278 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube}
	I0602 18:15:26.361404  634278 ubuntu.go:177] setting up certificates
	I0602 18:15:26.361417  634278 provision.go:83] configureAuth start
	I0602 18:15:26.361490  634278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220602175747-283122
	I0602 18:15:26.399073  634278 provision.go:138] copyHostCerts
	I0602 18:15:26.399137  634278 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem, removing ...
	I0602 18:15:26.399146  634278 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem
	I0602 18:15:26.399208  634278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.pem (1082 bytes)
	I0602 18:15:26.399370  634278 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem, removing ...
	I0602 18:15:26.399388  634278 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem
	I0602 18:15:26.399420  634278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cert.pem (1123 bytes)
	I0602 18:15:26.399474  634278 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem, removing ...
	I0602 18:15:26.399483  634278 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem
	I0602 18:15:26.399505  634278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/key.pem (1679 bytes)
	I0602 18:15:26.399545  634278 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem org=jenkins.kindnet-20220602175747-283122 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220602175747-283122]
	I0602 18:15:26.583399  634278 provision.go:172] copyRemoteCerts
	I0602 18:15:26.583481  634278 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0602 18:15:26.583527  634278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220602175747-283122
	I0602 18:15:26.619745  634278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49734 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kindnet-20220602175747-283122/id_rsa Username:docker}
	I0602 18:15:26.709107  634278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0602 18:15:26.728065  634278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0602 18:15:26.747139  634278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0602 18:15:26.766816  634278 provision.go:86] duration metric: configureAuth took 405.376969ms
	I0602 18:15:26.766852  634278 ubuntu.go:193] setting minikube options for container-runtime
	I0602 18:15:26.767046  634278 config.go:178] Loaded profile config "kindnet-20220602175747-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 18:15:26.767098  634278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220602175747-283122
	I0602 18:15:26.804886  634278 main.go:134] libmachine: Using SSH client type: native
	I0602 18:15:26.805117  634278 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49734 <nil> <nil>}
	I0602 18:15:26.805144  634278 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0602 18:15:26.925781  634278 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0602 18:15:26.925824  634278 ubuntu.go:71] root file system type: overlay
	I0602 18:15:26.926050  634278 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0602 18:15:26.926126  634278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220602175747-283122
	I0602 18:15:26.964962  634278 main.go:134] libmachine: Using SSH client type: native
	I0602 18:15:26.965143  634278 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49734 <nil> <nil>}
	I0602 18:15:26.965206  634278 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0602 18:15:27.104225  634278 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0602 18:15:27.104327  634278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220602175747-283122
	I0602 18:15:27.141734  634278 main.go:134] libmachine: Using SSH client type: native
	I0602 18:15:27.141883  634278 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 127.0.0.1 49734 <nil> <nil>}
	I0602 18:15:27.141902  634278 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0602 18:15:27.965327  634278 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-05-12 09:15:28.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-06-02 18:15:27.098459102 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0602 18:15:27.965373  634278 machine.go:91] provisioned docker machine in 1.954920665s
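	The unit swap just above uses a diff-or-replace idiom: diff exits 0 when the rendered docker.service.new matches the installed unit, and only a non-zero exit (a real change, as the printed diff shows here) triggers the mv, daemon-reload, enable, and restart. A sketch of the same conditional in Go, assuming the paths from the log:

	package main

	import "os/exec"

	// updateDockerUnit mirrors the "diff || { mv; reload; enable; restart }" command:
	// docker is only restarted when the rendered unit actually differs.
	func updateDockerUnit() error {
		if exec.Command("sudo", "diff", "-u",
			"/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new").Run() == nil {
			return nil // identical: keep the running daemon untouched
		}
		for _, args := range [][]string{
			{"mv", "/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"},
			{"systemctl", "-f", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		} {
			if err := exec.Command("sudo", args...).Run(); err != nil {
				return err
			}
		}
		return nil
	}

	func main() { _ = updateDockerUnit() }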
	I0602 18:15:27.965387  634278 client.go:171] LocalClient.Create took 8.990455002s
	I0602 18:15:27.965402  634278 start.go:173] duration metric: libmachine.API.Create for "kindnet-20220602175747-283122" took 8.990528473s
	I0602 18:15:27.965449  634278 start.go:306] post-start starting for "kindnet-20220602175747-283122" (driver="docker")
	I0602 18:15:27.965463  634278 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0602 18:15:27.965531  634278 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0602 18:15:27.965584  634278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220602175747-283122
	I0602 18:15:28.009491  634278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49734 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kindnet-20220602175747-283122/id_rsa Username:docker}
	I0602 18:15:28.098450  634278 ssh_runner.go:195] Run: cat /etc/os-release
	I0602 18:15:28.101605  634278 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0602 18:15:28.101648  634278 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0602 18:15:28.101661  634278 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0602 18:15:28.101668  634278 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0602 18:15:28.101678  634278 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/addons for local assets ...
	I0602 18:15:28.101744  634278 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files for local assets ...
	I0602 18:15:28.101824  634278 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem -> 2831222.pem in /etc/ssl/certs
	I0602 18:15:28.101914  634278 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0602 18:15:28.109564  634278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem --> /etc/ssl/certs/2831222.pem (1708 bytes)
	I0602 18:15:28.128986  634278 start.go:309] post-start completed in 163.513352ms
	I0602 18:15:28.129405  634278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220602175747-283122
	I0602 18:15:28.179780  634278 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/config.json ...
	I0602 18:15:28.180134  634278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 18:15:28.180197  634278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220602175747-283122
	I0602 18:15:28.216511  634278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49734 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kindnet-20220602175747-283122/id_rsa Username:docker}
	I0602 18:15:28.298762  634278 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0602 18:15:28.304488  634278 start.go:134] duration metric: createHost completed in 9.33229262s
	I0602 18:15:28.304520  634278 start.go:81] releasing machines lock for "kindnet-20220602175747-283122", held for 9.332463602s
	I0602 18:15:28.304612  634278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220602175747-283122
	I0602 18:15:28.340800  634278 ssh_runner.go:195] Run: systemctl --version
	I0602 18:15:28.340850  634278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220602175747-283122
	I0602 18:15:28.340910  634278 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0602 18:15:28.341080  634278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220602175747-283122
	I0602 18:15:28.381855  634278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49734 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kindnet-20220602175747-283122/id_rsa Username:docker}
	I0602 18:15:28.381863  634278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49734 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kindnet-20220602175747-283122/id_rsa Username:docker}
	I0602 18:15:28.496632  634278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0602 18:15:28.507763  634278 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 18:15:28.517865  634278 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0602 18:15:28.517959  634278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0602 18:15:28.528650  634278 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0602 18:15:28.544396  634278 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0602 18:15:28.636926  634278 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0602 18:15:28.718246  634278 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0602 18:15:28.728284  634278 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0602 18:15:28.819989  634278 ssh_runner.go:195] Run: sudo systemctl start docker
	I0602 18:15:28.829949  634278 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 18:15:28.879829  634278 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0602 18:15:28.925146  634278 out.go:204] * Preparing Kubernetes v1.23.6 on Docker 20.10.16 ...
	I0602 18:15:28.925246  634278 cli_runner.go:164] Run: docker network inspect kindnet-20220602175747-283122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0602 18:15:28.966680  634278 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0602 18:15:28.971418  634278 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
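	The host.minikube.internal upsert above filters out any stale entry, appends the current one, and copies the temp file back with cp rather than renaming it, presumably because /etc/hosts inside the container is a Docker bind mount whose inode must survive. A sketch of the same upsert in plain Go (function name and in-place rewrite are my rendering, not minikube code):

	package main

	import (
		"os"
		"strings"
	)

	// upsertHost drops any stale "<ip>\t<host>" line and appends the current one,
	// rewriting the file in place rather than renaming a temp file over it.
	func upsertHost(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		_ = upsertHost("/etc/hosts", "192.168.58.1", "host.minikube.internal")
	}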
	I0602 18:15:28.984835  634278 out.go:177]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0602 18:15:28.986961  634278 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime docker
	I0602 18:15:28.987061  634278 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 18:15:29.023967  634278 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 18:15:29.024001  634278 docker.go:541] Images already preloaded, skipping extraction
	I0602 18:15:29.024064  634278 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0602 18:15:29.061694  634278 docker.go:610] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.6
	k8s.gcr.io/kube-controller-manager:v1.23.6
	k8s.gcr.io/kube-proxy:v1.23.6
	k8s.gcr.io/kube-scheduler:v1.23.6
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0602 18:15:29.061726  634278 cache_images.go:84] Images are preloaded, skipping loading
	I0602 18:15:29.061783  634278 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0602 18:15:29.159440  634278 cni.go:95] Creating CNI manager for "kindnet"
	I0602 18:15:29.159481  634278 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0602 18:15:29.159502  634278 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20220602175747-283122 NodeName:kindnet-20220602175747-283122 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0602 18:15:29.159642  634278 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kindnet-20220602175747-283122"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0602 18:15:29.159716  634278 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kindnet-20220602175747-283122 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220602175747-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
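	The generated kubeadm.yaml above is four ---separated YAML documents: InitConfiguration (node-local API endpoint and kubelet extra args), ClusterConfiguration (component flags, cert SANs, pod and service subnets), KubeletConfiguration, and KubeProxyConfiguration; the kubelet drop-in then clears and re-pins ExecStart, with --cni-conf-dir=/etc/cni/net.mk coming from the kubelet.cni-conf-dir extra option noted earlier. A small sketch that lists those document kinds from the written file:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		for i, doc := range strings.Split(string(data), "\n---\n") {
			kind := "?"
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind: ") {
					kind = strings.TrimPrefix(line, "kind: ")
				}
			}
			fmt.Printf("document %d: %s\n", i+1, kind) // 4 documents for the config above
		}
	}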
	I0602 18:15:29.159774  634278 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0602 18:15:29.169158  634278 binaries.go:44] Found k8s binaries, skipping transfer
	I0602 18:15:29.169241  634278 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0602 18:15:29.178009  634278 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (407 bytes)
	I0602 18:15:29.194137  634278 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0602 18:15:29.209983  634278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2051 bytes)
	I0602 18:15:29.224581  634278 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0602 18:15:29.227999  634278 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0602 18:15:29.238731  634278 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122 for IP: 192.168.58.2
	I0602 18:15:29.238835  634278 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key
	I0602 18:15:29.238874  634278 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key
	I0602 18:15:29.238920  634278 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/client.key
	I0602 18:15:29.238935  634278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/client.crt with IP's: []
	I0602 18:15:29.336307  634278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/client.crt ...
	I0602 18:15:29.336351  634278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/client.crt: {Name:mk0d8472e0a15fd1b52eed41ae5e769cf62cb38a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 18:15:29.336607  634278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/client.key ...
	I0602 18:15:29.336624  634278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/client.key: {Name:mk10b095b943a4e19e4a906b3fdf364cadc030c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 18:15:29.336746  634278 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/apiserver.key.cee25041
	I0602 18:15:29.336764  634278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0602 18:15:29.760654  634278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/apiserver.crt.cee25041 ...
	I0602 18:15:29.760737  634278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/apiserver.crt.cee25041: {Name:mkaa266a2ffd6ce7a6d61fdd387b4e199aa1a66a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 18:15:29.760916  634278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/apiserver.key.cee25041 ...
	I0602 18:15:29.760930  634278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/apiserver.key.cee25041: {Name:mkec50ec4fb92642bde919207d935178c9fc536f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 18:15:29.761031  634278 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/apiserver.crt
	I0602 18:15:29.761115  634278 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/apiserver.key
	I0602 18:15:29.761166  634278 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/proxy-client.key
	I0602 18:15:29.761188  634278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/proxy-client.crt with IP's: []
	I0602 18:15:30.128879  634278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/proxy-client.crt ...
	I0602 18:15:30.128916  634278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/proxy-client.crt: {Name:mkf9e719113cd46f24987b3c815c124864e38da3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 18:15:30.129187  634278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/proxy-client.key ...
	I0602 18:15:30.129207  634278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/proxy-client.key: {Name:mke7010d6fe4af136917a4c04fbe2221dd23ea19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 18:15:30.129391  634278 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem (1338 bytes)
	W0602 18:15:30.129430  634278 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122_empty.pem, impossibly tiny 0 bytes
	I0602 18:15:30.129443  634278 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca-key.pem (1675 bytes)
	I0602 18:15:30.129468  634278 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/ca.pem (1082 bytes)
	I0602 18:15:30.129492  634278 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/cert.pem (1123 bytes)
	I0602 18:15:30.129515  634278 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/key.pem (1679 bytes)
	I0602 18:15:30.129553  634278 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem (1708 bytes)
	I0602 18:15:30.130153  634278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0602 18:15:30.173074  634278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0602 18:15:30.193953  634278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0602 18:15:30.215036  634278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/kindnet-20220602175747-283122/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0602 18:15:30.235071  634278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0602 18:15:30.255575  634278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0602 18:15:30.276915  634278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0602 18:15:30.296093  634278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0602 18:15:30.315154  634278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/ssl/certs/2831222.pem --> /usr/share/ca-certificates/2831222.pem (1708 bytes)
	I0602 18:15:30.335070  634278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0602 18:15:30.355711  634278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/certs/283122.pem --> /usr/share/ca-certificates/283122.pem (1338 bytes)
	I0602 18:15:30.377344  634278 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0602 18:15:30.414263  634278 ssh_runner.go:195] Run: openssl version
	I0602 18:15:30.420112  634278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2831222.pem && ln -fs /usr/share/ca-certificates/2831222.pem /etc/ssl/certs/2831222.pem"
	I0602 18:15:30.430221  634278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2831222.pem
	I0602 18:15:30.433875  634278 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  2 17:19 /usr/share/ca-certificates/2831222.pem
	I0602 18:15:30.433944  634278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2831222.pem
	I0602 18:15:30.440942  634278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2831222.pem /etc/ssl/certs/3ec20f2e.0"
	I0602 18:15:30.450452  634278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0602 18:15:30.460038  634278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0602 18:15:30.464272  634278 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  2 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0602 18:15:30.464335  634278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0602 18:15:30.470843  634278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0602 18:15:30.481353  634278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/283122.pem && ln -fs /usr/share/ca-certificates/283122.pem /etc/ssl/certs/283122.pem"
	I0602 18:15:30.491284  634278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/283122.pem
	I0602 18:15:30.495026  634278 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  2 17:19 /usr/share/ca-certificates/283122.pem
	I0602 18:15:30.495097  634278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/283122.pem
	I0602 18:15:30.500683  634278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/283122.pem /etc/ssl/certs/51391683.0"
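	The openssl/ln pairs above install each CA into the system trust store: OpenSSL locates CA certificates in /etc/ssl/certs by subject-name hash, so each PEM gets a "<hash>.0" symlink (b5213941.0 for minikubeCA.pem in this run). A sketch of one such link, assuming openssl on PATH:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash recreates the "<hash>.0" symlink convention used above.
	func linkBySubjectHash(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // b5213941 for minikubeCA.pem in this run
		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // ln -fs semantics: replace any existing link
		return os.Symlink(pem, link)
	}

	func main() {
		fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"))
	}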
	I0602 18:15:30.510019  634278 kubeadm.go:395] StartCluster: {Name:kindnet-20220602175747-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:kindnet-20220602175747-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 18:15:30.510201  634278 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0602 18:15:30.545122  634278 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0602 18:15:30.553077  634278 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0602 18:15:30.560566  634278 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0602 18:15:30.560626  634278 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0602 18:15:30.567819  634278 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
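	Exit status 2 from that ls is the signal minikube wants here: none of the four kubeconfig files exist, so there is no stale control plane to clean up and kubeadm init can run fresh (with the preflight checks a docker-driver node cannot satisfy explicitly ignored). A sketch of that exit-code interpretation:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "ls", "-la",
			"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf")
		err := cmd.Run()
		var exit *exec.ExitError
		if errors.As(err, &exit) && exit.ExitCode() == 2 {
			fmt.Println("no existing control plane: run kubeadm init directly") // this run's path
			return
		}
		fmt.Println("configs present (or unexpected failure):", err)
	}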
	I0602 18:15:30.567877  634278 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0602 18:15:31.144193  634278 out.go:204]   - Generating certificates and keys ...
	I0602 18:15:34.059699  634278 out.go:204]   - Booting up control plane ...
	I0602 18:15:41.606417  634278 out.go:204]   - Configuring RBAC rules ...
	I0602 18:15:42.043353  634278 cni.go:95] Creating CNI manager for "kindnet"
	I0602 18:15:42.045241  634278 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0602 18:15:42.046649  634278 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0602 18:15:42.051173  634278 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6/kubectl ...
	I0602 18:15:42.051198  634278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0602 18:15:42.066849  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0602 18:15:43.254689  634278 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.18779429s)
	I0602 18:15:43.254749  634278 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0602 18:15:43.254834  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:43.254834  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl label nodes minikube.k8s.io/version=v1.26.0-beta.1 minikube.k8s.io/commit=408dc4036f5a6d8b1313a2031b5dcb646a720fae minikube.k8s.io/name=kindnet-20220602175747-283122 minikube.k8s.io/updated_at=2022_06_02T18_15_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:43.262267  634278 ops.go:34] apiserver oom_adj: -16
	I0602 18:15:43.354780  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:43.919586  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:44.419134  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:44.919771  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:45.419469  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:45.919352  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:46.419793  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:46.919068  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:47.419215  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:47.919628  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:48.419168  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:48.918966  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:49.419085  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:49.919347  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:50.418982  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:50.919260  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:51.419480  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:51.918921  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:52.419342  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:52.919839  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:53.419354  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:53.919088  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:54.419793  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:54.919776  634278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0602 18:15:54.980869  634278 kubeadm.go:1045] duration metric: took 11.726095842s to wait for elevateKubeSystemPrivileges.
	I0602 18:15:54.980922  634278 kubeadm.go:397] StartCluster complete in 24.470919789s
	I0602 18:15:54.980943  634278 settings.go:142] acquiring lock: {Name:mkca69c8f6bc293fef8b552d09d771e1f2253f12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 18:15:54.981105  634278 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 18:15:54.982562  634278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig: {Name:mk4aad2ea1df51829b8bb57d56bd4d8e58dc96e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 18:15:55.534956  634278 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20220602175747-283122" rescaled to 1
	I0602 18:15:55.535028  634278 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0602 18:15:55.537903  634278 out.go:177] * Verifying Kubernetes components...
	I0602 18:15:55.535080  634278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0602 18:15:55.535103  634278 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0602 18:15:55.535362  634278 config.go:178] Loaded profile config "kindnet-20220602175747-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 18:15:55.539506  634278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 18:15:55.539567  634278 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20220602175747-283122"
	I0602 18:15:55.539574  634278 addons.go:65] Setting default-storageclass=true in profile "kindnet-20220602175747-283122"
	I0602 18:15:55.539594  634278 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20220602175747-283122"
	I0602 18:15:55.539598  634278 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20220602175747-283122"
	W0602 18:15:55.539603  634278 addons.go:165] addon storage-provisioner should already be in state true
	I0602 18:15:55.539667  634278 host.go:66] Checking if "kindnet-20220602175747-283122" exists ...
	I0602 18:15:55.540027  634278 cli_runner.go:164] Run: docker container inspect kindnet-20220602175747-283122 --format={{.State.Status}}
	I0602 18:15:55.540176  634278 cli_runner.go:164] Run: docker container inspect kindnet-20220602175747-283122 --format={{.State.Status}}
	I0602 18:15:55.556053  634278 node_ready.go:35] waiting up to 5m0s for node "kindnet-20220602175747-283122" to be "Ready" ...
	I0602 18:15:55.591548  634278 addons.go:153] Setting addon default-storageclass=true in "kindnet-20220602175747-283122"
	W0602 18:15:55.591577  634278 addons.go:165] addon default-storageclass should already be in state true
	I0602 18:15:55.591609  634278 host.go:66] Checking if "kindnet-20220602175747-283122" exists ...
	I0602 18:15:55.593980  634278 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0602 18:15:55.592099  634278 cli_runner.go:164] Run: docker container inspect kindnet-20220602175747-283122 --format={{.State.Status}}
	I0602 18:15:55.595548  634278 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 18:15:55.595569  634278 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0602 18:15:55.595627  634278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220602175747-283122
	I0602 18:15:55.639054  634278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49734 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kindnet-20220602175747-283122/id_rsa Username:docker}
	I0602 18:15:55.641441  634278 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0602 18:15:55.641471  634278 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0602 18:15:55.641529  634278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220602175747-283122
	I0602 18:15:55.644413  634278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0602 18:15:55.688026  634278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49734 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/kindnet-20220602175747-283122/id_rsa Username:docker}
	I0602 18:15:55.754379  634278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0602 18:15:55.851511  634278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0602 18:15:55.970150  634278 start.go:806] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0602 18:15:56.168147  634278 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0602 18:15:56.169639  634278 addons.go:417] enableAddons completed in 634.534856ms
	I0602 18:15:57.564669  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:00.064275  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:02.064661  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:04.564993  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:07.064266  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:09.064333  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:11.064370  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:13.564536  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:16.064638  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:18.065060  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:20.564450  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:23.182999  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:25.564956  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:28.064982  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:30.065288  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:32.565491  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:35.064751  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:37.564631  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:40.064732  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:42.564929  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:45.064635  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:47.564042  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:49.564310  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:51.564547  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:53.566005  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:56.064494  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:16:58.565105  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:01.064702  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:03.567283  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:06.064396  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:08.564842  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:11.064769  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:13.564757  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:16.064452  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:18.564527  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:20.564893  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:23.065106  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:25.565151  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:28.065003  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:30.564761  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:32.565119  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:35.064708  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:37.565230  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:40.064621  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:42.564684  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:45.065291  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:47.065504  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:49.565104  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:52.065180  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:54.563953  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:56.564155  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:17:58.565405  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:01.070803  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:03.565728  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:06.065447  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:08.565003  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:11.064601  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:13.065677  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:15.066085  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:17.564676  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:19.565517  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:22.064665  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:24.564124  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:27.064998  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:29.565548  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:32.064889  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:34.564752  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:37.064260  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:39.064767  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:41.564848  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:44.065304  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:46.565040  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:48.565480  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:51.065089  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:53.065128  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:55.564991  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:18:57.565159  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:00.065212  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:02.564357  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:05.064613  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:07.565860  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:10.064501  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:12.564793  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:14.565161  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:17.064251  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:19.064635  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:21.564013  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:23.564877  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:25.565460  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:28.064239  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:30.564194  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:32.565076  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:35.064454  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:37.564239  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:39.565168  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:42.064638  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:44.564234  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:47.064247  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:49.064491  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:51.564357  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:53.564510  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:55.564972  634278 node_ready.go:58] node "kindnet-20220602175747-283122" has status "Ready":"False"
	I0602 18:19:55.567561  634278 node_ready.go:38] duration metric: took 4m0.011464608s waiting for node "kindnet-20220602175747-283122" to be "Ready" ...
	I0602 18:19:55.570361  634278 out.go:177] 
	W0602 18:19:55.572421  634278 out.go:239] X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0602 18:19:55.572446  634278 out.go:239] * 
	* 
	W0602 18:19:55.573202  634278 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0602 18:19:55.575297  634278 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:103: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (277.11s)
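
The Start failure above is a readiness timeout, not an install error: kubeadm init and the kindnet CNI manifest apply both completed, but the node never reported Ready within the 5m wait (node_ready.go polled for a full 4m0s before giving up). A minimal follow-up sketch, assuming the profile were still running; the `app=kindnet` label selector is an assumption based on the stock upstream kindnet manifest, not something taken from this log:

	kubectl --context kindnet-20220602175747-283122 get nodes
	# On a node stuck NotReady, the CNI pods are the first suspect;
	# label selector assumed from the stock kindnet DaemonSet manifest.
	kubectl --context kindnet-20220602175747-283122 -n kube-system get pods -l app=kindnet
	# Node conditions and events would show why kubelet holds the node NotReady.
	kubectl --context kindnet-20220602175747-283122 describe node kindnet-20220602175747-283122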

                                                
                                    
TestNetworkPlugins/group/false/DNS (353.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:15:42.152224  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.145907611s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0602 18:15:52.068308  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.149600084s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0602 18:16:07.943029  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:16:11.421178  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138807122s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:16:39.106543  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147680863s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129604271s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0602 18:17:04.073155  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151436142s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144400526s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.208602283s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143520915s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.145089697s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0602 18:19:20.229170  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.154165961s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138932451s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/false/DNS (353.38s)
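
The DNS failure above is uniform: every nslookup attempt inside the netcat deployment timed out ("no servers could be reached"), so no DNS server was reachable at all; the test expected the answer to contain 10.96.0.1, consistent with the 10.96.0.0/12 service CIDR shown earlier in this report. A hand-run sketch of the same probe plus the obvious follow-up check, assuming the profile were still up; the `k8s-app=kube-dns` label is assumed from the stock CoreDNS manifests, not taken from this log:

	# The exact probe the test ran:
	kubectl --context false-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default
	# Illustrative success output (what the test's want=*"10.96.0.1"* matches):
	#   Name:      kubernetes.default
	#   Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
	# Whether CoreDNS itself is running (label assumed from stock manifests):
	kubectl --context false-20220602175747-283122 -n kube-system get pods -l k8s-app=kube-dns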

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (296.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.158700321s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159386505s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.129109657s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0602 18:19:06.273527  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
E0602 18:19:06.278881  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
E0602 18:19:06.289217  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
E0602 18:19:06.309494  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
E0602 18:19:06.349830  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
E0602 18:19:06.430197  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
E0602 18:19:06.590791  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
E0602 18:19:06.911362  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148440894s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:19:26.754725  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
E0602 18:19:29.022735  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 18:19:33.160525  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137991717s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:19:47.914033  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.150263122s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0602 18:20:06.658639  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130676282s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0602 18:20:41.825437  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602175747-283122/client.crt: no such file or directory
E0602 18:20:41.830791  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602175747-283122/client.crt: no such file or directory
E0602 18:20:41.841127  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602175747-283122/client.crt: no such file or directory
E0602 18:20:41.861450  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602175747-283122/client.crt: no such file or directory
E0602 18:20:41.901775  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602175747-283122/client.crt: no such file or directory
E0602 18:20:41.982099  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602175747-283122/client.crt: no such file or directory
E0602 18:20:42.142483  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602175747-283122/client.crt: no such file or directory
E0602 18:20:42.463067  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602175747-283122/client.crt: no such file or directory
E0602 18:20:43.104056  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602175747-283122/client.crt: no such file or directory
E0602 18:20:44.384408  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602175747-283122/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:20:46.945109  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602175747-283122/client.crt: no such file or directory
E0602 18:20:52.066075  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602175747-283122/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142723016s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0602 18:21:02.306901  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602175747-283122/client.crt: no such file or directory
E0602 18:21:07.942526  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 18:21:11.420743  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/default-k8s-different-port-20220602180121-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:21:22.787407  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602175747-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140691788s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:21:50.117318  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
E0602 18:21:58.876547  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602175746-283122/client.crt: no such file or directory
E0602 18:22:03.747864  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/calico-20220602175747-283122/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138299841s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0602 18:22:08.265450  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602175746-283122/client.crt: no such file or directory
E0602 18:22:08.270762  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602175746-283122/client.crt: no such file or directory
E0602 18:22:08.281102  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602175746-283122/client.crt: no such file or directory
E0602 18:22:08.301398  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602175746-283122/client.crt: no such file or directory
E0602 18:22:08.341697  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602175746-283122/client.crt: no such file or directory
E0602 18:22:08.422137  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602175746-283122/client.crt: no such file or directory
E0602 18:22:08.582559  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602175746-283122/client.crt: no such file or directory
E0602 18:22:08.903272  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602175746-283122/client.crt: no such file or directory
E0602 18:22:09.544240  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602175746-283122/client.crt: no such file or directory
E0602 18:22:10.824784  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602175746-283122/client.crt: no such file or directory
E0602 18:22:13.385540  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602175746-283122/client.crt: no such file or directory
E0602 18:22:18.506030  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602175746-283122/client.crt: no such file or directory
E0602 18:22:19.357318  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602175746-283122/client.crt: no such file or directory
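Note: the repeated cert_rotation.go:168 lines here (and throughout this report) are client-go certificate-reload warnings, not test failures. Each one references the client.crt of a profile (enable-default-cni-..., auto-..., calico-...) that a parallel test has already torn down, so the rotation watcher finds the file missing. When reading a log like this it can help to filter that noise out first; a minimal sketch, assuming the report has been saved to a file (test.log is a placeholder name):

	# Drop the cert-reload noise, keep everything else
	grep -v 'cert_rotation.go:168' test.log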

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:23:00.318153  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/auto-20220602175746-283122/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128338514s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:180: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (296.73s)
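Note: every bridge/DNS attempt fails the same way: ";; connection timed out; no servers could be reached" means the netcat pod received no reply from the cluster resolver at all, which points at the bridge CNI datapath rather than at DNS records. The assertion at net_test.go:180 wants the nslookup output to contain 10.96.0.1, the conventional ClusterIP of the kubernetes service. A minimal manual repro of the probe, assuming the bridge profile and its netcat deployment still exist:

	# The exact probe net_test.go:169 runs
	kubectl --context bridge-20220602175746-283122 exec deployment/netcat -- \
		nslookup kubernetes.default
	# What the assertion expects the answer to contain: the ClusterIP
	# of the kubernetes service (normally 10.96.0.1)
	kubectl --context bridge-20220602175746-283122 get svc kubernetes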

TestNetworkPlugins/group/kubenet/DNS (277.7s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (142.658617ms)
-- stdout --
	Server:		10.96.0.10
	Address:	10.96.0.10#53
	
	** server can't find kubernetes.default.default.svc.cluster.local: SERVFAIL
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (132.808548ms)
-- stdout --
	Server:		10.96.0.10
	Address:	10.96.0.10#53
	
	** server can't find kubernetes.default.default.svc.cluster.local: SERVFAIL
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (140.637033ms)
-- stdout --
	Server:		10.96.0.10
	Address:	10.96.0.10#53
	
	** server can't find kubernetes.default.default.svc.cluster.local: SERVFAIL
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (144.048594ms)
-- stdout --
	Server:		10.96.0.10
	Address:	10.96.0.10#53
	
	** server can't find kubernetes.default.default.svc.cluster.local: SERVFAIL
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (135.399236ms)
-- stdout --
	Server:		10.96.0.10
	Address:	10.96.0.10#53
	
	** server can't find kubernetes.default.default.svc.cluster.local: SERVFAIL
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0602 18:18:55.532183  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (143.328065ms)
-- stdout --
	Server:		10.96.0.10
	Address:	10.96.0.10#53
	
	** server can't find kubernetes.default.default.svc.cluster.local: SERVFAIL
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
E0602 18:19:07.551841  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (169.002086ms)
-- stdout --
	Server:		10.96.0.10
	Address:	10.96.0.10#53
	
	** server can't find kubernetes.default.default.svc.cluster.local: SERVFAIL
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0602 18:19:08.832517  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
E0602 18:19:11.392999  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
E0602 18:19:16.513847  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (138.078525ms)
-- stdout --
	Server:		10.96.0.10
	Address:	10.96.0.10#53
	
	** server can't find kubernetes.default.default.svc.cluster.local: SERVFAIL
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (134.034865ms)
-- stdout --
	Server:		10.96.0.10
	Address:	10.96.0.10#53
	
	** server can't find kubernetes.default.default.svc.cluster.local: SERVFAIL
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0602 18:19:47.235758  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (136.480073ms)
-- stdout --
	Server:		10.96.0.10
	Address:	10.96.0.10#53
	
	** server can't find kubernetes.default.default.svc.cluster.local: SERVFAIL
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0602 18:20:28.196785  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/old-k8s-version-20220602175851-283122/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (132.707202ms)
-- stdout --
	Server:		10.96.0.10
	Address:	10.96.0.10#53
	
	** server can't find kubernetes.default.default.svc.cluster.local: SERVFAIL
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (129.409275ms)
-- stdout --
	Server:		10.96.0.10
	Address:	10.96.0.10#53
	
	** server can't find kubernetes.default.default.svc.cluster.local: SERVFAIL
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0602 18:22:28.747268  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602175746-283122/client.crt: no such file or directory
E0602 18:22:49.228284  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/enable-default-cni-20220602175746-283122/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:169: (dbg) Non-zero exit: kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (134.130255ms)
-- stdout --
	Server:		10.96.0.10
	Address:	10.96.0.10#53
	
	** server can't find kubernetes.default.default.svc.cluster.local: SERVFAIL
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: failed to do nslookup on kubernetes.default: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/DNS (277.70s)
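Note: the kubenet failure mode differs from the bridge one above. CoreDNS at 10.96.0.10 does answer, but returns SERVFAIL, and the name it was asked about is kubernetes.default.default.svc.cluster.local, i.e. the short name after resolv.conf search-list expansion. A hedged way to separate a CoreDNS/upstream fault from a search-path artifact is to query the fully qualified name with a trailing dot, which bypasses the search list (assumes the kubenet profile and its netcat deployment still exist):

	# Trailing dot = no search-list expansion
	kubectl --context kubenet-20220602175746-283122 exec deployment/netcat -- \
		nslookup kubernetes.default.svc.cluster.local.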

Test pass (242/278)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 7.57
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.23.6/json-events 3.7
11 TestDownloadOnly/v1.23.6/preload-exists 0
15 TestDownloadOnly/v1.23.6/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.36
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
18 TestDownloadOnlyKic 2.98
19 TestBinaryMirror 0.94
20 TestOffline 319.42
22 TestAddons/Setup 93.14
25 TestAddons/parallel/Ingress 20.88
26 TestAddons/parallel/MetricsServer 5.75
27 TestAddons/parallel/HelmTiller 10.84
29 TestAddons/parallel/CSI 44.92
31 TestAddons/serial/GCPAuth 38.1
32 TestAddons/StoppedEnableDisable 11.16
33 TestCertOptions 32.79
34 TestCertExpiration 214.16
35 TestDockerFlags 31.54
36 TestForceSystemdFlag 68.08
37 TestForceSystemdEnv 34.28
38 TestKVMDriverInstallOrUpdate 1.5
42 TestErrorSpam/setup 26.28
43 TestErrorSpam/start 1.06
44 TestErrorSpam/status 1.22
45 TestErrorSpam/pause 1.56
46 TestErrorSpam/unpause 1.69
47 TestErrorSpam/stop 11.02
50 TestFunctional/serial/CopySyncFile 0
51 TestFunctional/serial/StartWithProxy 39.26
52 TestFunctional/serial/AuditLog 0
53 TestFunctional/serial/SoftStart 254.12
54 TestFunctional/serial/KubeContext 0.04
55 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/add_remote 2.93
59 TestFunctional/serial/CacheCmd/cache/add_local 0.89
60 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
61 TestFunctional/serial/CacheCmd/cache/list 0.07
62 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.39
63 TestFunctional/serial/CacheCmd/cache/cache_reload 1.95
64 TestFunctional/serial/CacheCmd/cache/delete 0.14
65 TestFunctional/serial/MinikubeKubectlCmd 0.26
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
67 TestFunctional/serial/ExtraConfig 19.77
69 TestFunctional/serial/LogsCmd 1.4
70 TestFunctional/serial/LogsFileCmd 1.44
72 TestFunctional/parallel/ConfigCmd 0.55
74 TestFunctional/parallel/DryRun 0.68
75 TestFunctional/parallel/InternationalLanguage 0.3
76 TestFunctional/parallel/StatusCmd 1.28
79 TestFunctional/parallel/ServiceCmd 15.2
80 TestFunctional/parallel/ServiceCmdConnect 12.24
81 TestFunctional/parallel/AddonsCmd 0.24
84 TestFunctional/parallel/SSHCmd 0.91
85 TestFunctional/parallel/CpCmd 1.98
86 TestFunctional/parallel/MySQL 23.61
87 TestFunctional/parallel/FileSync 0.43
88 TestFunctional/parallel/CertSync 2.69
92 TestFunctional/parallel/NodeLabels 0.07
94 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
96 TestFunctional/parallel/ProfileCmd/profile_not_create 0.6
97 TestFunctional/parallel/Version/short 0.16
98 TestFunctional/parallel/Version/components 0.9
99 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
100 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
101 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
102 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
103 TestFunctional/parallel/ImageCommands/ImageBuild 2.13
104 TestFunctional/parallel/ImageCommands/Setup 0.98
105 TestFunctional/parallel/ProfileCmd/profile_list 0.56
106 TestFunctional/parallel/DockerEnv/bash 1.62
107 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
108 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
109 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
110 TestFunctional/parallel/ProfileCmd/profile_json_output 0.61
111 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.39
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 19.23
116 TestFunctional/parallel/MountCmd/any-port 16.58
117 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.9
118 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.59
119 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.79
120 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
121 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.53
122 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.73
123 TestFunctional/parallel/MountCmd/specific-port 2.55
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
130 TestFunctional/delete_addon-resizer_images 0.1
131 TestFunctional/delete_my-image_image 0.03
132 TestFunctional/delete_minikube_cached_images 0.03
135 TestIngressAddonLegacy/StartLegacyK8sCluster 55.69
137 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.72
138 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.38
139 TestIngressAddonLegacy/serial/ValidateIngressAddons 40.03
142 TestJSONOutput/start/Command 41.11
143 TestJSONOutput/start/Audit 0
145 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/pause/Command 0.69
149 TestJSONOutput/pause/Audit 0
151 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/unpause/Command 0.6
155 TestJSONOutput/unpause/Audit 0
157 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/stop/Command 10.95
161 TestJSONOutput/stop/Audit 0
163 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
165 TestErrorJSONOutput 0.32
167 TestKicCustomNetwork/create_custom_network 27.29
168 TestKicCustomNetwork/use_default_bridge_network 27.72
169 TestKicExistingNetwork 27.77
170 TestKicCustomSubnet 28.03
171 TestMainNoArgs 0.07
172 TestMinikubeProfile 56.29
175 TestMountStart/serial/StartWithMountFirst 5.93
176 TestMountStart/serial/VerifyMountFirst 0.36
177 TestMountStart/serial/StartWithMountSecond 5.69
178 TestMountStart/serial/VerifyMountSecond 0.35
179 TestMountStart/serial/DeleteFirst 1.76
180 TestMountStart/serial/VerifyMountPostDelete 0.36
181 TestMountStart/serial/Stop 1.28
182 TestMountStart/serial/RestartStopped 6.73
183 TestMountStart/serial/VerifyMountPostStop 0.35
186 TestMultiNode/serial/FreshStart2Nodes 72.13
189 TestMultiNode/serial/AddNode 27
190 TestMultiNode/serial/ProfileList 0.39
191 TestMultiNode/serial/CopyFile 12.8
192 TestMultiNode/serial/StopNode 2.57
193 TestMultiNode/serial/StartAfterStop 24.83
194 TestMultiNode/serial/RestartKeepsNodes 102.76
195 TestMultiNode/serial/DeleteNode 5.39
196 TestMultiNode/serial/StopMultiNode 21.84
197 TestMultiNode/serial/RestartMultiNode 59.4
198 TestMultiNode/serial/ValidateNameConflict 28.37
203 TestPreload 111.64
205 TestScheduledStopUnix 100.32
206 TestSkaffold 56.87
208 TestInsufficientStorage 13.32
209 TestRunningBinaryUpgrade 66.99
211 TestKubernetesUpgrade 103.57
212 TestMissingContainerUpgrade 103.37
214 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
215 TestNoKubernetes/serial/StartWithK8s 47.43
216 TestNoKubernetes/serial/StartWithStopK8s 15.77
217 TestNoKubernetes/serial/Start 6.18
218 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
219 TestNoKubernetes/serial/ProfileList 1.3
220 TestNoKubernetes/serial/Stop 4.24
221 TestNoKubernetes/serial/StartNoArgs 6.05
222 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.41
223 TestStoppedBinaryUpgrade/Setup 0.51
224 TestStoppedBinaryUpgrade/Upgrade 70.19
225 TestStoppedBinaryUpgrade/MinikubeLogs 1.72
234 TestPause/serial/Start 55.21
246 TestPause/serial/SecondStartNoReconfiguration 5.12
247 TestPause/serial/Pause 0.65
248 TestPause/serial/VerifyStatus 0.44
249 TestPause/serial/Unpause 0.67
250 TestPause/serial/PauseAgain 0.89
251 TestPause/serial/DeletePaused 2.67
252 TestPause/serial/VerifyDeletedResources 2.48
254 TestStartStop/group/old-k8s-version/serial/FirstStart 314.31
256 TestStartStop/group/no-preload/serial/FirstStart 49.83
257 TestStartStop/group/no-preload/serial/DeployApp 8.31
259 TestStartStop/group/embed-certs/serial/FirstStart 289.19
260 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.69
261 TestStartStop/group/no-preload/serial/Stop 11.01
262 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
263 TestStartStop/group/no-preload/serial/SecondStart 339.41
265 TestStartStop/group/default-k8s-different-port/serial/FirstStart 289.84
266 TestStartStop/group/old-k8s-version/serial/DeployApp 7.34
267 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.64
268 TestStartStop/group/old-k8s-version/serial/Stop 10.93
269 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
270 TestStartStop/group/old-k8s-version/serial/SecondStart 599.38
271 TestStartStop/group/embed-certs/serial/DeployApp 8.29
272 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.63
273 TestStartStop/group/embed-certs/serial/Stop 10.88
274 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
275 TestStartStop/group/embed-certs/serial/SecondStart 577.68
276 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.01
277 TestStartStop/group/default-k8s-different-port/serial/DeployApp 8.32
278 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.65
279 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
280 TestStartStop/group/default-k8s-different-port/serial/Stop 11.05
281 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.4
282 TestStartStop/group/no-preload/serial/Pause 3.25
283 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.23
286 TestStartStop/group/newest-cni/serial/FirstStart 41.97
291 TestNetworkPlugins/group/auto/Start 289.18
292 TestStartStop/group/newest-cni/serial/DeployApp 0
293 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.88
294 TestStartStop/group/newest-cni/serial/Stop 11.03
295 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
296 TestStartStop/group/newest-cni/serial/SecondStart 20.72
297 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
298 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
299 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.45
300 TestStartStop/group/newest-cni/serial/Pause 3.27
301 TestNetworkPlugins/group/cilium/Start 86.22
302 TestNetworkPlugins/group/cilium/ControllerPod 5.02
303 TestNetworkPlugins/group/cilium/KubeletFlags 0.4
304 TestNetworkPlugins/group/cilium/NetCatPod 12.02
305 TestNetworkPlugins/group/cilium/DNS 0.16
306 TestNetworkPlugins/group/cilium/Localhost 0.14
307 TestNetworkPlugins/group/cilium/HairPin 0.15
308 TestNetworkPlugins/group/calico/Start 60.79
309 TestNetworkPlugins/group/calico/ControllerPod 5.02
310 TestNetworkPlugins/group/calico/KubeletFlags 0.4
311 TestNetworkPlugins/group/calico/NetCatPod 10.23
313 TestNetworkPlugins/group/auto/KubeletFlags 0.39
314 TestNetworkPlugins/group/auto/NetCatPod 11.39
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.41
319 TestStartStop/group/old-k8s-version/serial/Pause 3.25
320 TestNetworkPlugins/group/false/Start 44.09
321 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
322 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
323 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.4
324 TestStartStop/group/embed-certs/serial/Pause 3.39
326 TestNetworkPlugins/group/false/KubeletFlags 0.48
327 TestNetworkPlugins/group/false/NetCatPod 10.25
329 TestNetworkPlugins/group/enable-default-cni/Start 50.27
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.23
332 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
333 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
334 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
335 TestNetworkPlugins/group/bridge/Start 43.16
336 TestNetworkPlugins/group/kubenet/Start 40
337 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
338 TestNetworkPlugins/group/bridge/NetCatPod 10.23
340 TestNetworkPlugins/group/kubenet/KubeletFlags 0.39
341 TestNetworkPlugins/group/kubenet/NetCatPod 10.19
TestDownloadOnly/v1.16.0/json-events (7.57s)
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220602171206-283122 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220602171206-283122 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.569266364s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.57s)
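Note: with -o=json, minikube reports progress as one JSON event per line on stdout, and the json-events subtest asserts on that stream. A hedged way to inspect the same stream outside the harness (download-only-demo is a placeholder profile name):

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo \
		--force --kubernetes-version=v1.16.0 --driver=docker --container-runtime=docker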

TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220602171206-283122
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220602171206-283122: exit status 85 (88.385732ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 17:12:06
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 17:12:06.258936  283135 out.go:296] Setting OutFile to fd 1 ...
	I0602 17:12:06.259061  283135 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:12:06.259070  283135 out.go:309] Setting ErrFile to fd 2...
	I0602 17:12:06.259074  283135 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:12:06.259193  283135 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	W0602 17:12:06.259316  283135 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/config/config.json: no such file or directory
	I0602 17:12:06.259531  283135 out.go:303] Setting JSON to true
	I0602 17:12:06.260496  283135 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6880,"bootTime":1654183047,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0602 17:12:06.260589  283135 start.go:125] virtualization: kvm guest
	I0602 17:12:06.263871  283135 out.go:97] [download-only-20220602171206-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0602 17:12:06.264018  283135 notify.go:193] Checking for updates...
	W0602 17:12:06.264040  283135 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball: no such file or directory
	I0602 17:12:06.265947  283135 out.go:169] MINIKUBE_LOCATION=14269
	I0602 17:12:06.269334  283135 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 17:12:06.271161  283135 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:12:06.272927  283135 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 17:12:06.274561  283135 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0602 17:12:06.279189  283135 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0602 17:12:06.279496  283135 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 17:12:06.317326  283135 docker.go:137] docker version: linux-20.10.16
	I0602 17:12:06.317427  283135 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:12:06.424110  283135 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:34 SystemTime:2022-06-02 17:12:06.345253855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:12:06.424216  283135 docker.go:254] overlay module found
	I0602 17:12:06.426760  283135 out.go:97] Using the docker driver based on user configuration
	I0602 17:12:06.426785  283135 start.go:284] selected driver: docker
	I0602 17:12:06.426792  283135 start.go:806] validating driver "docker" against <nil>
	I0602 17:12:06.426988  283135 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:12:06.533502  283135 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:34 SystemTime:2022-06-02 17:12:06.455711174 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:12:06.533690  283135 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0602 17:12:06.534151  283135 start_flags.go:373] Using suggested 8000MB memory alloc based on sys=32103MB, container=32103MB
	I0602 17:12:06.534255  283135 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0602 17:12:06.536770  283135 out.go:169] Using Docker driver with the root privilege
	I0602 17:12:06.538562  283135 cni.go:95] Creating CNI manager for ""
	I0602 17:12:06.538589  283135 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0602 17:12:06.538601  283135 start_flags.go:306] config:
	{Name:download-only-20220602171206-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220602171206-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:12:06.540632  283135 out.go:97] Starting control plane node download-only-20220602171206-283122 in cluster download-only-20220602171206-283122
	I0602 17:12:06.540705  283135 cache.go:120] Beginning downloading kic base image for docker with docker
	I0602 17:12:06.542539  283135 out.go:97] Pulling base image ...
	I0602 17:12:06.542578  283135 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 17:12:06.542700  283135 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon
	I0602 17:12:06.580397  283135 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0602 17:12:06.580429  283135 cache.go:57] Caching tarball of preloaded images
	I0602 17:12:06.580797  283135 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 17:12:06.583282  283135 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0602 17:12:06.583304  283135 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0602 17:12:06.590327  283135 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local docker daemon, skipping pull
	I0602 17:12:06.590356  283135 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0602 17:12:06.590585  283135 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 in local cache directory
	I0602 17:12:06.590677  283135 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 to local cache
	I0602 17:12:06.633597  283135 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0602 17:12:10.373147  283135 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0602 17:12:10.373250  283135 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0602 17:12:11.071092  283135 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0602 17:12:11.071489  283135 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/download-only-20220602171206-283122/config.json ...
	I0602 17:12:11.071531  283135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/download-only-20220602171206-283122/config.json: {Name:mk094744f537f0125006a35e489cff8dbd7ff6b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0602 17:12:11.071726  283135 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0602 17:12:11.071948  283135 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220602171206-283122"
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
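Note: the PASS despite "minikube logs failed with error: exit status 85" is deliberate: the profile was created with --download-only, so no control plane node exists (hence "The control plane node \"\" does not exist." above) and the LogsDuration check accepts the non-zero exit, timing the call rather than requiring it to succeed. Reproducing the exit code outside the harness, assuming the download-only profile has not yet been deleted:

	out/minikube-linux-amd64 logs -p download-only-20220602171206-283122
	echo $?   # 85: there is no control plane node for `logs` to query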

TestDownloadOnly/v1.23.6/json-events (3.7s)
=== RUN   TestDownloadOnly/v1.23.6/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220602171206-283122 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220602171206-283122 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.700588957s)
--- PASS: TestDownloadOnly/v1.23.6/json-events (3.70s)

TestDownloadOnly/v1.23.6/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.23.6/preload-exists
--- PASS: TestDownloadOnly/v1.23.6/preload-exists (0.00s)

TestDownloadOnly/v1.23.6/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.23.6/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220602171206-283122
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220602171206-283122: exit status 85 (85.899145ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/02 17:12:13
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0602 17:12:13.916790  283305 out.go:296] Setting OutFile to fd 1 ...
	I0602 17:12:13.916932  283305 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:12:13.916944  283305 out.go:309] Setting ErrFile to fd 2...
	I0602 17:12:13.916949  283305 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:12:13.917105  283305 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	W0602 17:12:13.917289  283305 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/config/config.json: no such file or directory
	I0602 17:12:13.917447  283305 out.go:303] Setting JSON to true
	I0602 17:12:13.918395  283305 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6887,"bootTime":1654183047,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0602 17:12:13.918483  283305 start.go:125] virtualization: kvm guest
	I0602 17:12:13.921331  283305 out.go:97] [download-only-20220602171206-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0602 17:12:13.923467  283305 out.go:169] MINIKUBE_LOCATION=14269
	I0602 17:12:13.921615  283305 notify.go:193] Checking for updates...
	I0602 17:12:13.926922  283305 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 17:12:13.928627  283305 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:12:13.930447  283305 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 17:12:13.932160  283305 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220602171206-283122"
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.36s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.36s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220602171206-283122
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestDownloadOnlyKic (2.98s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220602171218-283122 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220602171218-283122 --force --alsologtostderr --driver=docker  --container-runtime=docker: (1.935378986s)
helpers_test.go:175: Cleaning up "download-docker-20220602171218-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220602171218-283122
--- PASS: TestDownloadOnlyKic (2.98s)

                                                
                                    
TestBinaryMirror (0.94s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220602171221-283122 --alsologtostderr --binary-mirror http://127.0.0.1:45023 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-20220602171221-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220602171221-283122
--- PASS: TestBinaryMirror (0.94s)

                                                
                                    
TestOffline (319.42s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20220602175454-283122 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20220602175454-283122 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (5m16.873261551s)
helpers_test.go:175: Cleaning up "offline-docker-20220602175454-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20220602175454-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20220602175454-283122: (2.5480615s)
--- PASS: TestOffline (319.42s)

                                                
                                    
TestAddons/Setup (93.14s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220602171222-283122 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220602171222-283122 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m33.144087792s)
--- PASS: TestAddons/Setup (93.14s)

                                                
                                    
TestAddons/parallel/Ingress (20.88s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220602171222-283122 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220602171222-283122 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:182: (dbg) Done: kubectl --context addons-20220602171222-283122 replace --force -f testdata/nginx-ingress-v1.yaml: (1.037001087s)
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220602171222-283122 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [bd92b0fe-71dd-4f0e-a459-e77abee4a792] Pending
helpers_test.go:342: "nginx" [bd92b0fe-71dd-4f0e-a459-e77abee4a792] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [bd92b0fe-71dd-4f0e-a459-e77abee4a792] Running

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.008252697s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220602171222-283122 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context addons-20220602171222-283122 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220602171222-283122 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220602171222-283122 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220602171222-283122 addons disable ingress --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p addons-20220602171222-283122 addons disable ingress --alsologtostderr -v=1: (7.683964207s)
--- PASS: TestAddons/parallel/Ingress (20.88s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.75s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 10.021729ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-bd6f4dd56-pffxq" [cc211685-0c70-4102-ab95-dc5b915d0d22] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009607818s

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220602171222-283122 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220602171222-283122 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.75s)

                                                
                                    
TestAddons/parallel/HelmTiller (10.84s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 10.104043ms

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-6d67d5465d-kj6rb" [5dc0e137-6b88-4169-b52e-81cd98c2f8f7] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009719009s

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220602171222-283122 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Done: kubectl --context addons-20220602171222-283122 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.289850446s)
addons_test.go:440: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220602171222-283122 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.84s)

                                                
                                    
TestAddons/parallel/CSI (44.92s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 13.634111ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220602171222-283122 create -f testdata/csi-hostpath-driver/pvc.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220602171222-283122 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220602171222-283122 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [b07a9ae8-7848-484a-8dde-0cc47fe7068e] Pending
helpers_test.go:342: "task-pv-pod" [b07a9ae8-7848-484a-8dde-0cc47fe7068e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [b07a9ae8-7848-484a-8dde-0cc47fe7068e] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 21.006965431s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220602171222-283122 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220602171222-283122 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220602171222-283122 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220602171222-283122 delete pod task-pv-pod
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220602171222-283122 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220602171222-283122 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220602171222-283122 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220602171222-283122 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [f06c03a8-38e8-4b39-a55f-5403e0e1be3e] Pending
helpers_test.go:342: "task-pv-pod-restore" [f06c03a8-38e8-4b39-a55f-5403e0e1be3e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [f06c03a8-38e8-4b39-a55f-5403e0e1be3e] Running
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.009136951s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220602171222-283122 delete pod task-pv-pod-restore
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220602171222-283122 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220602171222-283122 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220602171222-283122 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-linux-amd64 -p addons-20220602171222-283122 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.147429266s)
addons_test.go:592: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220602171222-283122 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.92s)

                                                
                                    
TestAddons/serial/GCPAuth (38.1s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220602171222-283122 create -f testdata/busybox.yaml
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [9c7fc771-e872-4a39-9cde-3e86a4911d3d] Pending
helpers_test.go:342: "busybox" [9c7fc771-e872-4a39-9cde-3e86a4911d3d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [9c7fc771-e872-4a39-9cde-3e86a4911d3d] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.013561181s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220602171222-283122 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220602171222-283122 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220602171222-283122 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-linux-amd64 -p addons-20220602171222-283122 addons disable gcp-auth --alsologtostderr -v=1: (5.839836156s)
addons_test.go:681: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220602171222-283122 addons enable gcp-auth
addons_test.go:687: (dbg) Run:  kubectl --context addons-20220602171222-283122 apply -f testdata/private-image.yaml
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-7f8587d5b7-9lplm" [acacdcdb-774f-4c94-8146-22665a24a29d] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-7f8587d5b7-9lplm" [acacdcdb-774f-4c94-8146-22665a24a29d] Running
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 12.008292667s
addons_test.go:700: (dbg) Run:  kubectl --context addons-20220602171222-283122 apply -f testdata/private-image-eu.yaml
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-869dcfd8c7-7dwpz" [2c08b21e-44cb-41f6-9f2a-8b69014b7dbc] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-869dcfd8c7-7dwpz" [2c08b21e-44cb-41f6-9f2a-8b69014b7dbc] Running
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 10.007116578s
--- PASS: TestAddons/serial/GCPAuth (38.10s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.16s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220602171222-283122
addons_test.go:132: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220602171222-283122: (10.943147581s)
addons_test.go:136: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220602171222-283122
addons_test.go:140: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220602171222-283122
--- PASS: TestAddons/StoppedEnableDisable (11.16s)

                                                
                                    
TestCertOptions (32.79s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220602175818-283122 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220602175818-283122 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (26.4213549s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220602175818-283122 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-20220602175818-283122 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220602175818-283122 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220602175818-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220602175818-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220602175818-283122: (5.539653779s)
--- PASS: TestCertOptions (32.79s)

                                                
                                    
TestCertExpiration (214.16s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220602175746-283122 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220602175746-283122 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (27.131477403s)

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220602175746-283122 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220602175746-283122 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (4.425533999s)
helpers_test.go:175: Cleaning up "cert-expiration-20220602175746-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220602175746-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220602175746-283122: (2.605572218s)
--- PASS: TestCertExpiration (214.16s)

                                                
                                    
TestDockerFlags (31.54s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20220602175747-283122 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20220602175747-283122 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (28.229398167s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220602175747-283122 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20220602175747-283122 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-20220602175747-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20220602175747-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20220602175747-283122: (2.474564798s)
--- PASS: TestDockerFlags (31.54s)

                                                
                                    
TestForceSystemdFlag (68.08s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220602175454-283122 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220602175454-283122 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m5.11824425s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220602175454-283122 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220602175454-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220602175454-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220602175454-283122: (2.418735009s)
--- PASS: TestForceSystemdFlag (68.08s)

                                                
                                    
TestForceSystemdEnv (34.28s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220602175842-283122 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220602175842-283122 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (31.231368916s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220602175842-283122 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-20220602175842-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220602175842-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220602175842-283122: (2.563347415s)
--- PASS: TestForceSystemdEnv (34.28s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.5s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.50s)

                                                
                                    
TestErrorSpam/setup (26.28s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220602171820-283122 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220602171820-283122 --driver=docker  --container-runtime=docker
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220602171820-283122 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220602171820-283122 --driver=docker  --container-runtime=docker: (26.276017801s)
--- PASS: TestErrorSpam/setup (26.28s)

                                                
                                    
TestErrorSpam/start (1.06s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 start --dry-run
--- PASS: TestErrorSpam/start (1.06s)

                                                
                                    
TestErrorSpam/status (1.22s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 status
--- PASS: TestErrorSpam/status (1.22s)

                                                
                                    
TestErrorSpam/pause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 pause
--- PASS: TestErrorSpam/pause (1.56s)

                                                
                                    
TestErrorSpam/unpause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

                                                
                                    
TestErrorSpam/stop (11.02s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 stop
E0602 17:18:55.532220  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 17:18:55.538098  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 17:18:55.548407  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 17:18:55.568741  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 17:18:55.609131  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 17:18:55.689599  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 17:18:55.850069  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 17:18:56.170817  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 17:18:56.811862  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 17:18:58.092404  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 17:19:00.654390  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 stop: (10.730334213s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220602171820-283122 --log_dir /tmp/nospam-20220602171820-283122 stop
--- PASS: TestErrorSpam/stop (11.02s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/files/etc/test/nested/copy/283122/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (39.26s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220602171905-283122 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E0602 17:19:05.775177  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 17:19:16.015385  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 17:19:36.495967  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
functional_test.go:2160: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220602171905-283122 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (39.257363955s)
--- PASS: TestFunctional/serial/StartWithProxy (39.26s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (254.12s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220602171905-283122 --alsologtostderr -v=8
E0602 17:20:17.456480  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 17:21:39.377353  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 17:23:55.531489  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
functional_test.go:651: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220602171905-283122 --alsologtostderr -v=8: (4m14.116306582s)
functional_test.go:655: soft start took 4m14.11707923s for "functional-20220602171905-283122" cluster.
--- PASS: TestFunctional/serial/SoftStart (254.12s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220602171905-283122 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220602171905-283122 cache add k8s.gcr.io/pause:3.3: (1.380632684s)
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.89s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220602171905-283122 /tmp/TestFunctionalserialCacheCmdcacheadd_local787312768/001
functional_test.go:1081: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 cache add minikube-local-cache-test:functional-20220602171905-283122
functional_test.go:1086: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 cache delete minikube-local-cache-test:functional-20220602171905-283122
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220602171905-283122
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.89s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.39s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.39s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (384.634543ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 cache reload
functional_test.go:1155: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 kubectl -- --context functional-20220602171905-283122 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.26s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220602171905-283122 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (19.77s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220602171905-283122 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0602 17:24:23.217841  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
functional_test.go:749: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220602171905-283122 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (19.770154366s)
functional_test.go:753: restart took 19.770286953s for "functional-20220602171905-283122" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (19.77s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 logs
functional_test.go:1228: (dbg) Done: out/minikube-linux-amd64 -p functional-20220602171905-283122 logs: (1.404347395s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 logs --file /tmp/TestFunctionalserialLogsFileCmd4222275075/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-linux-amd64 -p functional-20220602171905-283122 logs --file /tmp/TestFunctionalserialLogsFileCmd4222275075/001/logs.txt: (1.435591554s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220602171905-283122 config get cpus: exit status 14 (89.719346ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 config set cpus 2

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 config get cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220602171905-283122 config get cpus: exit status 14 (83.707536ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.55s)

                                                
                                    
TestFunctional/parallel/DryRun (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220602171905-283122 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220602171905-283122 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (264.35176ms)

                                                
                                                
-- stdout --
	* [functional-20220602171905-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0602 17:24:53.808162  324245 out.go:296] Setting OutFile to fd 1 ...
	I0602 17:24:53.808297  324245 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:24:53.808309  324245 out.go:309] Setting ErrFile to fd 2...
	I0602 17:24:53.808314  324245 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:24:53.808442  324245 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 17:24:53.808710  324245 out.go:303] Setting JSON to false
	I0602 17:24:53.809955  324245 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7647,"bootTime":1654183047,"procs":351,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0602 17:24:53.810045  324245 start.go:125] virtualization: kvm guest
	I0602 17:24:53.812824  324245 out.go:177] * [functional-20220602171905-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0602 17:24:53.814748  324245 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 17:24:53.816380  324245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 17:24:53.818163  324245 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:24:53.819860  324245 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 17:24:53.821549  324245 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0602 17:24:53.823732  324245 config.go:178] Loaded profile config "functional-20220602171905-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:24:53.824194  324245 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 17:24:53.865940  324245 docker.go:137] docker version: linux-20.10.16
	I0602 17:24:53.866077  324245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:24:53.987252  324245 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:72 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:40 SystemTime:2022-06-02 17:24:53.90001556 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:24:53.987365  324245 docker.go:254] overlay module found
	I0602 17:24:53.989887  324245 out.go:177] * Using the docker driver based on existing profile
	I0602 17:24:53.991473  324245 start.go:284] selected driver: docker
	I0602 17:24:53.991505  324245 start.go:806] validating driver "docker" against &{Name:functional-20220602171905-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602171905-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:24:53.991660  324245 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 17:24:53.994520  324245 out.go:177] 
	W0602 17:24:53.996269  324245 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0602 17:24:53.997844  324245 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220602171905-283122 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.68s)
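
The non-zero exit above is the expected outcome: even with --dry-run, minikube validates the requested resources, and 250MB falls below the usable minimum of 1800MB, so it exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A rough Go sketch of that kind of guard follows; the constant and function names are illustrative, not minikube's actual identifiers.

package main

import (
	"fmt"
	"os"
)

const minUsableMemoryMB = 1800 // the minimum the log reports

// validateMemory mirrors the shape of the check the test trips:
// reject a request below the usable minimum with a reason code.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to", err)
		os.Exit(23) // matches the exit status the test asserts on
	}
}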

TestFunctional/parallel/InternationalLanguage (0.3s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220602171905-283122 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220602171905-283122 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (295.407881ms)

-- stdout --
	* [functional-20220602171905-283122] minikube v1.26.0-beta.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0602 17:24:35.184898  320819 out.go:296] Setting OutFile to fd 1 ...
	I0602 17:24:35.185080  320819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:24:35.185088  320819 out.go:309] Setting ErrFile to fd 2...
	I0602 17:24:35.185096  320819 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:24:35.185359  320819 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 17:24:35.185758  320819 out.go:303] Setting JSON to false
	I0602 17:24:35.187396  320819 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7629,"bootTime":1654183047,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0602 17:24:35.187479  320819 start.go:125] virtualization: kvm guest
	I0602 17:24:35.195211  320819 out.go:177] * [functional-20220602171905-283122] minikube v1.26.0-beta.1 sur Ubuntu 20.04 (kvm/amd64)
	I0602 17:24:35.197517  320819 out.go:177]   - MINIKUBE_LOCATION=14269
	I0602 17:24:35.199303  320819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0602 17:24:35.200866  320819 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	I0602 17:24:35.202432  320819 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	I0602 17:24:35.203883  320819 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0602 17:24:35.205827  320819 config.go:178] Loaded profile config "functional-20220602171905-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:24:35.206279  320819 driver.go:358] Setting default libvirt URI to qemu:///system
	I0602 17:24:35.252142  320819 docker.go:137] docker version: linux-20.10.16
	I0602 17:24:35.252262  320819 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:24:35.379878  320819 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:72 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:41 SystemTime:2022-06-02 17:24:35.289089091 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:24:35.379980  320819 docker.go:254] overlay module found
	I0602 17:24:35.382741  320819 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0602 17:24:35.384076  320819 start.go:284] selected driver: docker
	I0602 17:24:35.384106  320819 start.go:806] validating driver "docker" against &{Name:functional-20220602171905-283122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1654032859-14252@sha256:6460c031afce844e0e3c071f4bf5274136c9036e4954d4d6fe2b32ad73fc3496 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220602171905-283122 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0602 17:24:35.384290  320819 start.go:817] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0602 17:24:35.395412  320819 out.go:177] 
	W0602 17:24:35.396840  320819 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0602 17:24:35.398199  320819 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.30s)

TestFunctional/parallel/StatusCmd (1.28s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.28s)

TestFunctional/parallel/ServiceCmd (15.2s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220602171905-283122 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220602171905-283122 expose deployment hello-node --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-lqgxj" [61b390e9-5c78-416a-8716-8db0ccafce37] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-lqgxj" [61b390e9-5c78-416a-8716-8db0ccafce37] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 11.102926219s
functional_test.go:1448: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-linux-amd64 -p functional-20220602171905-283122 service list: (1.882544454s)
functional_test.go:1462: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1475: found endpoint: https://192.168.49.2:32642
functional_test.go:1490: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 service hello-node --url
functional_test.go:1510: found endpoint for hello-node: http://192.168.49.2:32642
--- PASS: TestFunctional/parallel/ServiceCmd (15.20s)
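
For reference, the URL that "service --url" prints for a NodePort service is just the node IP plus the allocated node port. A small Go sketch that reconstructs it with kubectl; the context, service name, and node IP come from this log, and kubectl is assumed to be on PATH.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Read the allocated NodePort straight from the Service object.
	out, err := exec.Command("kubectl",
		"--context", "functional-20220602171905-283122",
		"get", "svc", "hello-node",
		"-o", "jsonpath={.spec.ports[0].nodePort}").Output()
	if err != nil {
		log.Fatal(err)
	}
	// 192.168.49.2 is the single node's IP in this run.
	fmt.Printf("http://192.168.49.2:%s\n", strings.TrimSpace(string(out)))
}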

TestFunctional/parallel/ServiceCmdConnect (12.24s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220602171905-283122 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220602171905-283122 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-hqrhr" [114f13e9-e9be-4cea-8f64-a34f4562d232] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-74cf8bc446-hqrhr" [114f13e9-e9be-4cea-8f64-a34f4562d232] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.007823821s
functional_test.go:1578: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 service hello-node-connect --url

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1578: (dbg) Done: out/minikube-linux-amd64 -p functional-20220602171905-283122 service hello-node-connect --url: (1.04300202s)
functional_test.go:1584: found endpoint for hello-node-connect: http://192.168.49.2:31394
functional_test.go:1604: http://192.168.49.2:31394: success! body:

Hostname: hello-node-connect-74cf8bc446-hqrhr

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31394
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.24s)
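
The body above is the echoserver's standard reply. A minimal Go sketch of the connectivity check this test performs: fetch the printed NodePort URL and confirm the service answers. The URL is the one from this run's log and only resolves while that cluster exists.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Endpoint printed by "minikube service hello-node-connect --url" above.
	resp, err := http.Get("http://192.168.49.2:31394")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("status %d, %d bytes\n%s", resp.StatusCode, len(body), body)
}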

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1631: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/SSHCmd (0.91s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.91s)

TestFunctional/parallel/CpCmd (1.98s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh -n functional-20220602171905-283122 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 cp functional-20220602171905-283122:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3987676324/001/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh -n functional-20220602171905-283122 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.98s)

TestFunctional/parallel/MySQL (23.61s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220602171905-283122 replace --force -f testdata/mysql.yaml

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-h4hfc" [1186af24-338d-4359-a332-8818234c23d7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-h4hfc" [1186af24-338d-4359-a332-8818234c23d7] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.014838136s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220602171905-283122 exec mysql-b87c45988-h4hfc -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220602171905-283122 exec mysql-b87c45988-h4hfc -- mysql -ppassword -e "show databases;": exit status 1 (175.187826ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220602171905-283122 exec mysql-b87c45988-h4hfc -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220602171905-283122 exec mysql-b87c45988-h4hfc -- mysql -ppassword -e "show databases;": exit status 1 (151.919117ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220602171905-283122 exec mysql-b87c45988-h4hfc -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220602171905-283122 exec mysql-b87c45988-h4hfc -- mysql -ppassword -e "show databases;": exit status 1 (225.490938ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220602171905-283122 exec mysql-b87c45988-h4hfc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.61s)
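
The failed attempts above are the usual readiness race: the pod reports Running before mysqld finishes initializing, so the first queries fail with access-denied or socket errors and the test simply re-runs the command until one succeeds. A sketch of that retry pattern follows; the pod name and context come from this log, and the attempt count and delay are arbitrary choices for the sketch.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"--context", "functional-20220602171905-283122",
		"exec", "mysql-b87c45988-h4hfc", "--",
		"mysql", "-ppassword", "-e", "show databases;",
	}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("succeeded on attempt %d:\n%s", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(2 * time.Second) // simple fixed backoff for the sketch
	}
}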

TestFunctional/parallel/FileSync (0.43s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/283122/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "sudo cat /etc/test/nested/copy/283122/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)

TestFunctional/parallel/CertSync (2.69s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/283122.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "sudo cat /etc/ssl/certs/283122.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/283122.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "sudo cat /usr/share/ca-certificates/283122.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/2831222.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "sudo cat /etc/ssl/certs/2831222.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/2831222.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "sudo cat /usr/share/ca-certificates/2831222.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.69s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220602171905-283122 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "sudo systemctl is-active crio": exit status 1 (469.436821ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.6s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.60s)

TestFunctional/parallel/Version/short (0.16s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 version --short
--- PASS: TestFunctional/parallel/Version/short (0.16s)

TestFunctional/parallel/Version/components (0.9s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.90s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image ls --format short

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220602171905-283122 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220602171905-283122
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220602171905-283122
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220602171905-283122 image ls --format table:
|---------------------------------------------|----------------------------------|---------------|--------|
|                    Image                    |               Tag                |   Image ID    |  Size  |
|---------------------------------------------|----------------------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                               | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/pause                            | 3.1                              | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | 3.6                              | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                     | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/echoserver                       | 1.8                              | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/kube-apiserver                   | v1.23.6                          | 8fa62c12256df | 135MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.6                          | 595f327f224a4 | 53.5MB |
| k8s.gcr.io/kube-controller-manager          | v1.23.6                          | df7b72818ad2e | 125MB  |
| k8s.gcr.io/pause                            | latest                           | 350b164e7ae1d | 240kB  |
| docker.io/kubernetesui/dashboard            | <none>                           | 1042d9e0d8fcc | 246MB  |
| k8s.gcr.io/kube-proxy                       | v1.23.6                          | 4c03754524064 | 112MB  |
| docker.io/library/mysql                     | 5.7                              | 2a0961b7de03c | 462MB  |
| docker.io/library/nginx                     | alpine                           | b1c3acb288825 | 23.4MB |
| gcr.io/k8s-minikube/busybox                 | latest                           | beae173ccac6a | 1.24MB |
| k8s.gcr.io/etcd                             | 3.5.1-0                          | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                           | a4ca41631cc7a | 46.8MB |
| gcr.io/google-containers/addon-resizer      | functional-20220602171905-283122 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/minikube-local-cache-test | functional-20220602171905-283122 | 1d29da4bbd994 | 30B    |
| docker.io/kubernetesui/metrics-scraper      | <none>                           | 115053965e86b | 43.8MB |
| k8s.gcr.io/pause                            | 3.3                              | 0184c1613d929 | 683kB  |
|---------------------------------------------|----------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220602171905-283122 image ls --format json:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"462000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"1d29da4bbd994855efb301f1ed49524e69ffdc796027fde691a4eff777bb7bb9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220602171905-283122"],"size":"30"},{"id":"b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.6"],"size":"53500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220602171905-283122"],"size":"32900000"},{"id":"1042d9e0d8fcc64f2c6b9ade3af9e8ed255fa04d18d838d0b3650ad7636534a9","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.6"],"size":"135000000"},{"id":"4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.6"],"size":"112000000"},{"id":"df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.6"],"size":"125000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
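
The JSON output above is an array of image records with id, repoDigests, repoTags, and size fields; note that size is a decimal string of bytes, not a number. A small Go sketch that decodes it (the binary path and profile name are the ones from this log):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the JSON keys visible in the log output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-20220602171905-283122",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}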

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image ls --format yaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220602171905-283122 image ls --format yaml:
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220602171905-283122
size: "32900000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 1d29da4bbd994855efb301f1ed49524e69ffdc796027fde691a4eff777bb7bb9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220602171905-283122
size: "30"
- id: b1c3acb28882519cf6d3a4d7fe2b21d0ae20bde9cfd2c08a7de057f8cfccff15
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.6
size: "135000000"
- id: 595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.6
size: "53500000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 1042d9e0d8fcc64f2c6b9ade3af9e8ed255fa04d18d838d0b3650ad7636534a9
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.6
size: "125000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 2a0961b7de03c7b11afd13fec09d0d30992b6e0b4f947a4aba4273723778ed95
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "462000000"
- id: 4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.6
size: "112000000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
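
The YAML above is a list of image entries keyed by id, repoDigests, repoTags, and size. A minimal Go sketch for decoding that output (the imageEntry type is hypothetical, not minikube's own; it assumes a minikube binary on PATH and the gopkg.in/yaml.v2 package):

package main

import (
	"fmt"
	"os/exec"

	"gopkg.in/yaml.v2"
)

// imageEntry mirrors one item of the `image ls --format yaml` list shown
// above; field names match the YAML keys, the type itself is illustrative.
type imageEntry struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	// Run the same command the test drives and decode its stdout.
	out, err := exec.Command("minikube", "image", "ls", "--format", "yaml").Output()
	if err != nil {
		panic(err)
	}
	var images []imageEntry
	if err := yaml.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID[:12], img.RepoTags, img.Size)
	}
}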

TestFunctional/parallel/ImageCommands/ImageBuild (2.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh pgrep buildkitd: exit status 1 (370.897338ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image build -t localhost/my-image:functional-20220602171905-283122 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p functional-20220602171905-283122 image build -t localhost/my-image:functional-20220602171905-283122 testdata/build: (1.500653858s)
functional_test.go:315: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220602171905-283122 image build -t localhost/my-image:functional-20220602171905-283122 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 3f1df46a144f
Removing intermediate container 3f1df46a144f
---> fb631017a24d
Step 3/3 : ADD content.txt /
---> 34142f37b8a7
Successfully built 34142f37b8a7
Successfully tagged localhost/my-image:functional-20220602171905-283122
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.13s)
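
The three build steps above imply a Dockerfile of roughly this shape (reconstructed from the Step 1/3 through 3/3 lines; the actual contents of testdata/build may differ):

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /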

TestFunctional/parallel/ImageCommands/Setup (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220602171905-283122
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.98s)

TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: Took "464.142999ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1324: Took "91.210543ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

TestFunctional/parallel/DockerEnv/bash (1.62s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:491: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220602171905-283122 docker-env) && out/minikube-linux-amd64 status -p functional-20220602171905-283122"

=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:491: (dbg) Done: /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220602171905-283122 docker-env) && out/minikube-linux-amd64 status -p functional-20220602171905-283122": (1.054852988s)
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20220602171905-283122 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.62s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-linux-amd64 profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: Took "513.258717ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1374: Took "98.926744ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.61s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602171905-283122

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20220602171905-283122 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602171905-283122: (5.923083694s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.39s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220602171905-283122 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (19.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220602171905-283122 apply -f testdata/testsvc.yaml

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [ea969128-1db3-49a1-81a6-4aa2e45db4d6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [ea969128-1db3-49a1-81a6-4aa2e45db4d6] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 19.010783082s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (19.23s)

TestFunctional/parallel/MountCmd/any-port (16.58s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220602171905-283122 /tmp/TestFunctionalparallelMountCmdany-port329315777/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1654190675402797127" to /tmp/TestFunctionalparallelMountCmdany-port329315777/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1654190675402797127" to /tmp/TestFunctionalparallelMountCmdany-port329315777/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1654190675402797127" to /tmp/TestFunctionalparallelMountCmdany-port329315777/001/test-1654190675402797127
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (637.616969ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun  2 17:24 created-by-test
-rw-r--r-- 1 docker docker 24 Jun  2 17:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun  2 17:24 test-1654190675402797127
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh cat /mount-9p/test-1654190675402797127

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220602171905-283122 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [cb0ba726-6431-414e-ac37-63960834122f] Pending

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [cb0ba726-6431-414e-ac37-63960834122f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [cb0ba726-6431-414e-ac37-63960834122f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [cb0ba726-6431-414e-ac37-63960834122f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.008912186s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220602171905-283122 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh stat /mount-9p/created-by-pod

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220602171905-283122 /tmp/TestFunctionalparallelMountCmdany-port329315777/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (16.58s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602171905-283122

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-linux-amd64 -p functional-20220602171905-283122 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602171905-283122: (2.502520803s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.90s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220602171905-283122
functional_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602171905-283122

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220602171905-283122 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220602171905-283122: (5.058259329s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.59s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image save gcr.io/google-containers/addon-resizer:functional-20220602171905-283122 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.79s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image rm gcr.io/google-containers/addon-resizer:functional-20220602171905-283122
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Done: out/minikube-linux-amd64 -p functional-20220602171905-283122 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar: (1.256939913s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.53s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220602171905-283122
functional_test.go:419: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220602171905-283122

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Done: out/minikube-linux-amd64 -p functional-20220602171905-283122 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220602171905-283122: (2.653644164s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220602171905-283122
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.73s)

TestFunctional/parallel/MountCmd/specific-port (2.55s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220602171905-283122 /tmp/TestFunctionalparallelMountCmdspecific-port507429112/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (434.046128ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220602171905-283122 /tmp/TestFunctionalparallelMountCmdspecific-port507429112/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh "sudo umount -f /mount-9p": exit status 1 (421.981562ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-20220602171905-283122 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220602171905-283122 /tmp/TestFunctionalparallelMountCmdspecific-port507429112/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.55s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220602171905-283122 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.101.23.193 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220602171905-283122 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220602171905-283122
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220602171905-283122
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220602171905-283122
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (55.69s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220602173000-283122 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220602173000-283122 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (55.688735797s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (55.69s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.72s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220602173000-283122 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220602173000-283122 addons enable ingress --alsologtostderr -v=5: (11.72360997s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.72s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220602173000-283122 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (40.03s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:162: (dbg) Run:  kubectl --context ingress-addon-legacy-20220602173000-283122 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context ingress-addon-legacy-20220602173000-283122 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.805749s)
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220602173000-283122 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context ingress-addon-legacy-20220602173000-283122 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [d0c48150-973f-44b7-b9e5-94eb2f7649b4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [d0c48150-973f-44b7-b9e5-94eb2f7649b4] Running
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.006658876s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220602173000-283122 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context ingress-addon-legacy-20220602173000-283122 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220602173000-283122 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220602173000-283122 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220602173000-283122 addons disable ingress-dns --alsologtostderr -v=1: (11.552143497s)
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220602173000-283122 addons disable ingress --alsologtostderr -v=1
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220602173000-283122 addons disable ingress --alsologtostderr -v=1: (7.280327355s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (40.03s)

TestJSONOutput/start/Command (41.11s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220602173150-283122 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220602173150-283122 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (41.110420028s)
--- PASS: TestJSONOutput/start/Command (41.11s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220602173150-283122 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220602173150-283122 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.95s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220602173150-283122 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220602173150-283122 --output=json --user=testUser: (10.946911212s)
--- PASS: TestJSONOutput/stop/Command (10.95s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220602173245-283122 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220602173245-283122 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.429989ms)

-- stdout --
	{"specversion":"1.0","id":"33def47a-c7a4-4bbb-88e1-9e2e09b7cedf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220602173245-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"248cb54f-f73f-4f52-8727-86b9d6c53e09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14269"}}
	{"specversion":"1.0","id":"efde1b4a-45e5-41fd-93c4-10f504c9565f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"922f7e77-b17a-4403-89bf-1cc2bb84574a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig"}}
	{"specversion":"1.0","id":"18d0f442-5596-4783-9ffa-79f984f77f17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube"}}
	{"specversion":"1.0","id":"da763069-588e-483d-adff-67d44dc3263e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9476c80c-dceb-4013-99f7-a066aa6f4da1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220602173245-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220602173245-283122
--- PASS: TestErrorJSONOutput (0.32s)
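
Each stdout line above is a CloudEvents-style envelope with specversion, id, source, type, datacontenttype, and data fields. A minimal Go sketch that decodes one of them (the event type is illustrative, not minikube's own; the sample line is the error event above with its empty data fields dropped):

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the CloudEvents envelope seen in the stdout above;
// the struct name and field selection are illustrative.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"9476c80c-dceb-4013-99f7-a066aa6f4da1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	// Prints: io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS The driver 'fail' is not supported on linux/amd64
	fmt.Println(e.Type, e.Data["name"], e.Data["message"])
}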

TestKicCustomNetwork/create_custom_network (27.29s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220602173246-283122 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220602173246-283122 --network=: (24.980757384s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220602173246-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220602173246-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220602173246-283122: (2.279788055s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.29s)

TestKicCustomNetwork/use_default_bridge_network (27.72s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220602173313-283122 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220602173313-283122 --network=bridge: (25.587122955s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220602173313-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220602173313-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220602173313-283122: (2.094685817s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.72s)

TestKicExistingNetwork (27.77s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220602173341-283122 --network=existing-network
E0602 17:33:55.532182  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220602173341-283122 --network=existing-network: (25.263705174s)
helpers_test.go:175: Cleaning up "existing-network-20220602173341-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220602173341-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220602173341-283122: (2.273379221s)
--- PASS: TestKicExistingNetwork (27.77s)

TestKicCustomSubnet (28.03s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-20220602173409-283122 --subnet=192.168.60.0/24
E0602 17:34:33.161365  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 17:34:33.166715  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 17:34:33.177047  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 17:34:33.197340  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 17:34:33.237670  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 17:34:33.317993  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 17:34:33.478440  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 17:34:33.799558  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 17:34:34.440483  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-20220602173409-283122 --subnet=192.168.60.0/24: (25.70667616s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220602173409-283122 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220602173409-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-20220602173409-283122
E0602 17:34:35.720837  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-20220602173409-283122: (2.289654053s)
--- PASS: TestKicCustomSubnet (28.03s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (56.29s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-20220602173437-283122 --driver=docker  --container-runtime=docker
E0602 17:34:38.281717  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 17:34:43.402362  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 17:34:53.643150  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-20220602173437-283122 --driver=docker  --container-runtime=docker: (25.008812288s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-20220602173437-283122 --driver=docker  --container-runtime=docker
E0602 17:35:14.124042  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 17:35:18.578195  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-20220602173437-283122 --driver=docker  --container-runtime=docker: (25.257305236s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-20220602173437-283122
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-20220602173437-283122
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220602173437-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-20220602173437-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-20220602173437-283122: (2.396539782s)
helpers_test.go:175: Cleaning up "first-20220602173437-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-20220602173437-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-20220602173437-283122: (2.268858764s)
--- PASS: TestMinikubeProfile (56.29s)
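
The profile round-trip exercised here maps onto everyday CLI use. A minimal sketch (the profile names `first` and `second` are hypothetical); `minikube profile <name>` selects the active profile and `profile list -ojson` emits machine-readable state, exactly as invoked by the test:

    minikube start -p first --driver=docker
    minikube start -p second --driver=docker
    minikube profile first            # make "first" the active profile
    minikube profile list -ojson      # inspect both profiles as JSON
    minikube delete -p second && minikube delete -p first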

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.93s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220602173533-283122 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220602173533-283122 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.928684357s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.93s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220602173533-283122 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
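
The mount options exercised by these serial subtests combine into a single start invocation, after which the host directory is visible inside the guest at /minikube-host. A minimal sketch with a hypothetical profile name `mnt`; every flag is copied from the test command line:

    # Start without Kubernetes, mounting the host with explicit uid/gid, msize and port.
    minikube start -p mnt --memory=2048 --mount --mount-gid 0 --mount-msize 6543 \
      --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
    minikube -p mnt ssh -- ls /minikube-host   # list the mounted host directory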

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.69s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220602173533-283122 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220602173533-283122 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (4.689414326s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.69s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220602173533-283122 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.76s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220602173533-283122 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220602173533-283122 --alsologtostderr -v=5: (1.755924683s)
--- PASS: TestMountStart/serial/DeleteFirst (1.76s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220602173533-283122 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220602173533-283122
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220602173533-283122: (1.275132165s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (6.73s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220602173533-283122
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220602173533-283122: (5.731581102s)
E0602 17:35:55.084426  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (6.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220602173533-283122 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (72.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220602173558-283122 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0602 17:36:07.942258  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 17:36:07.947608  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 17:36:07.957918  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 17:36:07.978223  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 17:36:08.019177  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 17:36:08.099850  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 17:36:08.260322  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 17:36:08.580925  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 17:36:09.221316  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 17:36:10.502246  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 17:36:13.063488  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 17:36:18.184596  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 17:36:28.425840  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
E0602 17:36:48.906122  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220602173558-283122 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m11.534494745s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (72.13s)
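
Bringing up the two-node cluster and growing it afterwards uses the commands seen above. A minimal sketch (hypothetical profile name `mn`); `node add`, exercised by the next subtest, appends one more worker:

    minikube start -p mn --nodes=2 --memory=2200 --driver=docker
    minikube -p mn status              # one control plane + one worker
    minikube node add -p mn            # grow the cluster to a third node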

                                                
                                    
TestMultiNode/serial/AddNode (27.00s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220602173558-283122 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220602173558-283122 -v 3 --alsologtostderr: (26.200471228s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.00s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.39s)

                                                
                                    
TestMultiNode/serial/CopyFile (12.80s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 cp testdata/cp-test.txt multinode-20220602173558-283122:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 cp multinode-20220602173558-283122:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1662095504/001/cp-test_multinode-20220602173558-283122.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 cp multinode-20220602173558-283122:/home/docker/cp-test.txt multinode-20220602173558-283122-m02:/home/docker/cp-test_multinode-20220602173558-283122_multinode-20220602173558-283122-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122-m02 "sudo cat /home/docker/cp-test_multinode-20220602173558-283122_multinode-20220602173558-283122-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 cp multinode-20220602173558-283122:/home/docker/cp-test.txt multinode-20220602173558-283122-m03:/home/docker/cp-test_multinode-20220602173558-283122_multinode-20220602173558-283122-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122-m03 "sudo cat /home/docker/cp-test_multinode-20220602173558-283122_multinode-20220602173558-283122-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 cp testdata/cp-test.txt multinode-20220602173558-283122-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 cp multinode-20220602173558-283122-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1662095504/001/cp-test_multinode-20220602173558-283122-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 cp multinode-20220602173558-283122-m02:/home/docker/cp-test.txt multinode-20220602173558-283122:/home/docker/cp-test_multinode-20220602173558-283122-m02_multinode-20220602173558-283122.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122 "sudo cat /home/docker/cp-test_multinode-20220602173558-283122-m02_multinode-20220602173558-283122.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 cp multinode-20220602173558-283122-m02:/home/docker/cp-test.txt multinode-20220602173558-283122-m03:/home/docker/cp-test_multinode-20220602173558-283122-m02_multinode-20220602173558-283122-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122-m03 "sudo cat /home/docker/cp-test_multinode-20220602173558-283122-m02_multinode-20220602173558-283122-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 cp testdata/cp-test.txt multinode-20220602173558-283122-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 cp multinode-20220602173558-283122-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1662095504/001/cp-test_multinode-20220602173558-283122-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 cp multinode-20220602173558-283122-m03:/home/docker/cp-test.txt multinode-20220602173558-283122:/home/docker/cp-test_multinode-20220602173558-283122-m03_multinode-20220602173558-283122.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122 "sudo cat /home/docker/cp-test_multinode-20220602173558-283122-m03_multinode-20220602173558-283122.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 cp multinode-20220602173558-283122-m03:/home/docker/cp-test.txt multinode-20220602173558-283122-m02:/home/docker/cp-test_multinode-20220602173558-283122-m03_multinode-20220602173558-283122-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 ssh -n multinode-20220602173558-283122-m02 "sudo cat /home/docker/cp-test_multinode-20220602173558-283122-m03_multinode-20220602173558-283122-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (12.80s)
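
The copy matrix above boils down to three shapes of `minikube cp`: host-to-node, node-to-host, and node-to-node. A minimal sketch with hypothetical paths and the profile/node names `mn` and `mn-m02`, mirroring the test's invocations:

    minikube -p mn cp ./cp-test.txt mn:/home/docker/cp-test.txt                    # host -> node
    minikube -p mn cp mn:/home/docker/cp-test.txt /tmp/out.txt                     # node -> host
    minikube -p mn cp mn:/home/docker/cp-test.txt mn-m02:/home/docker/cp-test.txt  # node -> node
    minikube -p mn ssh -n mn-m02 "sudo cat /home/docker/cp-test.txt"               # verify on the target node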

                                                
                                    
TestMultiNode/serial/StopNode (2.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220602173558-283122 node stop m03: (1.294395653s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220602173558-283122 status: exit status 7 (638.539174ms)

                                                
                                                
-- stdout --
	multinode-20220602173558-283122
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220602173558-283122-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220602173558-283122-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220602173558-283122 status --alsologtostderr: exit status 7 (633.374653ms)

                                                
                                                
-- stdout --
	multinode-20220602173558-283122
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220602173558-283122-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220602173558-283122-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 17:46:02.687495  396098 out.go:296] Setting OutFile to fd 1 ...
	I0602 17:46:02.687637  396098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:46:02.687653  396098 out.go:309] Setting ErrFile to fd 2...
	I0602 17:46:02.687661  396098 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:46:02.687790  396098 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 17:46:02.688073  396098 out.go:303] Setting JSON to false
	I0602 17:46:02.688103  396098 mustload.go:65] Loading cluster: multinode-20220602173558-283122
	I0602 17:46:02.688456  396098 config.go:178] Loaded profile config "multinode-20220602173558-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:46:02.688476  396098 status.go:253] checking status of multinode-20220602173558-283122 ...
	I0602 17:46:02.688914  396098 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Status}}
	I0602 17:46:02.723059  396098 status.go:328] multinode-20220602173558-283122 host status = "Running" (err=<nil>)
	I0602 17:46:02.723106  396098 host.go:66] Checking if "multinode-20220602173558-283122" exists ...
	I0602 17:46:02.723393  396098 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602173558-283122
	I0602 17:46:02.756754  396098 host.go:66] Checking if "multinode-20220602173558-283122" exists ...
	I0602 17:46:02.757080  396098 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 17:46:02.757141  396098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122
	I0602 17:46:02.791620  396098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49517 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122/id_rsa Username:docker}
	I0602 17:46:02.873723  396098 ssh_runner.go:195] Run: systemctl --version
	I0602 17:46:02.877450  396098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 17:46:02.887021  396098 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0602 17:46:02.992912  396098 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:71 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-06-02 17:46:02.916730416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1027-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662787584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.16 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16 Expected:212e8b6fa2f44b9c21b2798135fc6fb7c53efc16} RuncCommit:{ID:v1.1.1-0-g52de29d Expected:v1.1.1-0-g52de29d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.5.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0602 17:46:02.993614  396098 kubeconfig.go:92] found "multinode-20220602173558-283122" server: "https://192.168.49.2:8443"
	I0602 17:46:02.993643  396098 api_server.go:165] Checking apiserver status ...
	I0602 17:46:02.993675  396098 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0602 17:46:03.003207  396098 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1706/cgroup
	I0602 17:46:03.010538  396098 api_server.go:181] apiserver freezer: "11:freezer:/docker/96c9bda474fd4a427ecb8f5b9f99b269e586546e76098bc8dfc3d3b9a5f8cd9a/kubepods/burstable/pod9b88133a9e18b6ec7d53499c3c2debcc/9c2cc422d2c1bf5c6fd088abd5e66de2ecba52f38401b50a0f7d9baf208e7f28"
	I0602 17:46:03.010602  396098 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/96c9bda474fd4a427ecb8f5b9f99b269e586546e76098bc8dfc3d3b9a5f8cd9a/kubepods/burstable/pod9b88133a9e18b6ec7d53499c3c2debcc/9c2cc422d2c1bf5c6fd088abd5e66de2ecba52f38401b50a0f7d9baf208e7f28/freezer.state
	I0602 17:46:03.017068  396098 api_server.go:203] freezer state: "THAWED"
	I0602 17:46:03.017104  396098 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0602 17:46:03.021963  396098 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0602 17:46:03.021988  396098 status.go:419] multinode-20220602173558-283122 apiserver status = Running (err=<nil>)
	I0602 17:46:03.021999  396098 status.go:255] multinode-20220602173558-283122 status: &{Name:multinode-20220602173558-283122 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0602 17:46:03.022021  396098 status.go:253] checking status of multinode-20220602173558-283122-m02 ...
	I0602 17:46:03.022255  396098 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122-m02 --format={{.State.Status}}
	I0602 17:46:03.057798  396098 status.go:328] multinode-20220602173558-283122-m02 host status = "Running" (err=<nil>)
	I0602 17:46:03.057836  396098 host.go:66] Checking if "multinode-20220602173558-283122-m02" exists ...
	I0602 17:46:03.058193  396098 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220602173558-283122-m02
	I0602 17:46:03.093097  396098 host.go:66] Checking if "multinode-20220602173558-283122-m02" exists ...
	I0602 17:46:03.093382  396098 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0602 17:46:03.093430  396098 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220602173558-283122-m02
	I0602 17:46:03.128048  396098 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49522 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/machines/multinode-20220602173558-283122-m02/id_rsa Username:docker}
	I0602 17:46:03.210116  396098 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0602 17:46:03.220946  396098 status.go:255] multinode-20220602173558-283122-m02 status: &{Name:multinode-20220602173558-283122-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0602 17:46:03.220990  396098 status.go:253] checking status of multinode-20220602173558-283122-m03 ...
	I0602 17:46:03.221266  396098 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122-m03 --format={{.State.Status}}
	I0602 17:46:03.254323  396098 status.go:328] multinode-20220602173558-283122-m03 host status = "Stopped" (err=<nil>)
	I0602 17:46:03.254363  396098 status.go:341] host is not running, skipping remaining checks
	I0602 17:46:03.254371  396098 status.go:255] multinode-20220602173558-283122-m03 status: &{Name:multinode-20220602173558-283122-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.57s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (24.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 node start m03 --alsologtostderr
E0602 17:46:07.942841  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220602173558-283122 node start m03 --alsologtostderr: (23.927652011s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (24.83s)
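
Stopping one worker and restarting it is the pattern behind the two subtests above. A minimal sketch (hypothetical profile `mn`); note that `minikube status` deliberately exits non-zero (exit status 7 in this run) while any node is stopped, which is why the test treats the non-zero exit as expected:

    minikube -p mn node stop m03       # stop only the third node
    minikube -p mn status; echo $?     # prints per-node state, exits 7 here
    minikube -p mn node start m03      # bring the node back
    kubectl get nodes                  # all nodes should return to Ready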

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (102.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220602173558-283122
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220602173558-283122
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220602173558-283122: (22.803340187s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220602173558-283122 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220602173558-283122 --wait=true -v=8 --alsologtostderr: (1m19.816010096s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220602173558-283122
--- PASS: TestMultiNode/serial/RestartKeepsNodes (102.76s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220602173558-283122 node delete m03: (4.628708119s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.39s)
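
The go-template on the last `kubectl` line above is a compact way to assert that every remaining node reports Ready. The same check, with the quoting adjusted so it pastes into an interactive shell (otherwise verbatim from the test):

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # expect one "True" per node after "minikube node delete m03"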

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220602173558-283122 stop: (21.574696474s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220602173558-283122 status: exit status 7 (134.772914ms)

                                                
                                                
-- stdout --
	multinode-20220602173558-283122
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220602173558-283122-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220602173558-283122 status --alsologtostderr: exit status 7 (134.556764ms)

                                                
                                                
-- stdout --
	multinode-20220602173558-283122
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220602173558-283122-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0602 17:48:38.006463  410684 out.go:296] Setting OutFile to fd 1 ...
	I0602 17:48:38.006650  410684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:48:38.006659  410684 out.go:309] Setting ErrFile to fd 2...
	I0602 17:48:38.006664  410684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0602 17:48:38.006788  410684 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/bin
	I0602 17:48:38.006974  410684 out.go:303] Setting JSON to false
	I0602 17:48:38.006995  410684 mustload.go:65] Loading cluster: multinode-20220602173558-283122
	I0602 17:48:38.007351  410684 config.go:178] Loaded profile config "multinode-20220602173558-283122": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.6
	I0602 17:48:38.007368  410684 status.go:253] checking status of multinode-20220602173558-283122 ...
	I0602 17:48:38.007728  410684 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122 --format={{.State.Status}}
	I0602 17:48:38.041337  410684 status.go:328] multinode-20220602173558-283122 host status = "Stopped" (err=<nil>)
	I0602 17:48:38.041374  410684 status.go:341] host is not running, skipping remaining checks
	I0602 17:48:38.041385  410684 status.go:255] multinode-20220602173558-283122 status: &{Name:multinode-20220602173558-283122 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0602 17:48:38.041414  410684 status.go:253] checking status of multinode-20220602173558-283122-m02 ...
	I0602 17:48:38.041774  410684 cli_runner.go:164] Run: docker container inspect multinode-20220602173558-283122-m02 --format={{.State.Status}}
	I0602 17:48:38.075808  410684 status.go:328] multinode-20220602173558-283122-m02 host status = "Stopped" (err=<nil>)
	I0602 17:48:38.075838  410684 status.go:341] host is not running, skipping remaining checks
	I0602 17:48:38.075846  410684 status.go:255] multinode-20220602173558-283122-m02 status: &{Name:multinode-20220602173558-283122-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.84s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (59.40s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220602173558-283122 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0602 17:48:55.530651  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 17:49:33.160577  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220602173558-283122 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (58.645152803s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220602173558-283122 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (59.40s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (28.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220602173558-283122
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220602173558-283122-m02 --driver=docker  --container-runtime=docker
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220602173558-283122-m02 --driver=docker  --container-runtime=docker: exit status 14 (89.083727ms)

                                                
                                                
-- stdout --
	* [multinode-20220602173558-283122-m02] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220602173558-283122-m02' is duplicated with machine name 'multinode-20220602173558-283122-m02' in profile 'multinode-20220602173558-283122'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220602173558-283122-m03 --driver=docker  --container-runtime=docker
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220602173558-283122-m03 --driver=docker  --container-runtime=docker: (25.510388203s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220602173558-283122
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220602173558-283122: exit status 80 (375.689424ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20220602173558-283122
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220602173558-283122-m03 already exists in multinode-20220602173558-283122-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220602173558-283122-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220602173558-283122-m03: (2.328795458s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (28.37s)

                                                
                                    
TestPreload (111.64s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220602175012-283122 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0
E0602 17:50:56.205988  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 17:51:07.942349  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220602175012-283122 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0: (1m16.644079491s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220602175012-283122 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220602175012-283122 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3
E0602 17:51:58.578937  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220602175012-283122 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3: (31.277485181s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220602175012-283122 -- docker images
helpers_test.go:175: Cleaning up "test-preload-20220602175012-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220602175012-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220602175012-283122: (2.383912702s)
--- PASS: TestPreload (111.64s)
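
The preload scenario verifies that images pulled into a non-preloaded cluster survive a Kubernetes version bump. A minimal sketch of the same sequence (hypothetical profile `pre`; flags and versions as in the test):

    minikube start -p pre --preload=false --kubernetes-version=v1.17.0 --driver=docker
    minikube ssh -p pre -- docker pull gcr.io/k8s-minikube/busybox
    minikube start -p pre --kubernetes-version=v1.17.3 --driver=docker   # upgrade in place
    minikube ssh -p pre -- docker images                                 # busybox should still be listed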

                                                
                                    
TestScheduledStopUnix (100.32s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220602175204-283122 --memory=2048 --driver=docker  --container-runtime=docker
E0602 17:52:30.991121  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220602175204-283122 --memory=2048 --driver=docker  --container-runtime=docker: (26.586553978s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220602175204-283122 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220602175204-283122 -n scheduled-stop-20220602175204-283122
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220602175204-283122 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220602175204-283122 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220602175204-283122 -n scheduled-stop-20220602175204-283122
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220602175204-283122
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220602175204-283122 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220602175204-283122
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220602175204-283122: exit status 7 (103.65262ms)

                                                
                                                
-- stdout --
	scheduled-stop-20220602175204-283122
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220602175204-283122 -n scheduled-stop-20220602175204-283122
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220602175204-283122 -n scheduled-stop-20220602175204-283122: exit status 7 (101.828582ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220602175204-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220602175204-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220602175204-283122: (1.868686911s)
--- PASS: TestScheduledStopUnix (100.32s)
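
Scheduled stop is driven entirely by flags on `minikube stop`. A minimal sketch (hypothetical profile `sched`); scheduling, cancelling, and polling the remaining time all appear in the test above:

    minikube stop -p sched --schedule 5m                 # stop in five minutes
    minikube status -p sched --format={{.TimeToStop}}    # time left on the schedule
    minikube stop -p sched --cancel-scheduled            # abort the pending stop
    minikube stop -p sched --schedule 15s                # reschedule; after ~15s the host is Stopped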

                                                
                                    
TestSkaffold (56.87s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1870787185 version
skaffold_test.go:63: skaffold version: v1.38.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-20220602175344-283122 --memory=2600 --driver=docker  --container-runtime=docker
E0602 17:53:55.531529  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20220602175344-283122 --memory=2600 --driver=docker  --container-runtime=docker: (24.650426597s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:110: (dbg) Run:  /tmp/skaffold.exe1870787185 run --minikube-profile skaffold-20220602175344-283122 --kube-context skaffold-20220602175344-283122 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /tmp/skaffold.exe1870787185 run --minikube-profile skaffold-20220602175344-283122 --kube-context skaffold-20220602175344-283122 --status-check=true --port-forward=false --interactive=false: (18.931105077s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-5f9d4f7bbd-rdfdz" [a01241e2-dbf1-4e5d-b39d-f0dba50d89b4] Running
E0602 17:54:33.160995  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012338558s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-6995c84469-8zq4p" [be09a633-94cf-47d1-b006-e09bafe5df42] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006553364s
helpers_test.go:175: Cleaning up "skaffold-20220602175344-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-20220602175344-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20220602175344-283122: (2.544105589s)
--- PASS: TestSkaffold (56.87s)
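
The skaffold integration drives a plain `skaffold run` against a named minikube profile. A minimal sketch (hypothetical profile `sk`; the skaffold flags are exactly those passed by the test, and `leeroy-app` is the sample app it deploys):

    minikube start -p sk --memory=2600 --driver=docker
    skaffold run --minikube-profile sk --kube-context sk \
      --status-check=true --port-forward=false --interactive=false
    kubectl get pods -l app=leeroy-app   # wait for the deployed pods to be Running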

                                                
                                    
TestInsufficientStorage (13.32s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220602175441-283122 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220602175441-283122 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.668630294s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7a58ca0b-b649-4f57-a9f3-e7e85c994b08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220602175441-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d29f7c6e-ac00-42f8-a1a4-5044ebd45902","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14269"}}
	{"specversion":"1.0","id":"f6a067c9-73d3-4a4c-88f7-7fcaf4e799a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"406a45e1-7127-49cf-81bc-ef905052df14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig"}}
	{"specversion":"1.0","id":"608d724a-4d16-4a79-a4e6-d36d2f1a355d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube"}}
	{"specversion":"1.0","id":"30b6360c-fcfc-4a05-9d43-17e29450fd2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d83e7f46-463f-4540-aa32-58d937f051cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8bc4596e-69c6-409c-8899-a90ad4e79ac5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cb14bec7-01cb-424a-a3d0-e5c135233bea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7db0a93c-e3e0-4576-93f9-eb43bbece65e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with the root privilege"}}
	{"specversion":"1.0","id":"367a26c7-590a-4192-8302-666b5b1eb327","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220602175441-283122 in cluster insufficient-storage-20220602175441-283122","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ebd07086-076a-4cf8-ac18-76cbe152df38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"49a937e3-112a-43a2-82e4-a20b7a667a4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"34870ded-f639-4f21-9e24-cf1deef34b7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220602175441-283122 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220602175441-283122 --output=json --layout=cluster: exit status 7 (373.140184ms)

-- stdout --
	{"Name":"insufficient-storage-20220602175441-283122","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220602175441-283122","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0602 17:54:52.625136  444028 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220602175441-283122" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220602175441-283122 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220602175441-283122 --output=json --layout=cluster: exit status 7 (374.041064ms)

-- stdout --
	{"Name":"insufficient-storage-20220602175441-283122","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220602175441-283122","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0602 17:54:52.999348  444137 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220602175441-283122" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	E0602 17:54:53.008498  444137 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/insufficient-storage-20220602175441-283122/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220602175441-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220602175441-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220602175441-283122: (1.903730916s)
--- PASS: TestInsufficientStorage (13.32s)
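
Each line of the --output=json stream captured above is a self-contained CloudEvents-style JSON object. The sketch below is illustrative only, not minikube's own source; the struct models just the keys visible in this log, and everything else is ignored by encoding/json:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models only the keys visible in the log lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // some event lines are long
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip anything that is not a JSON event line
		}
		// For the run above this ends with the io.k8s.sigs.minikube.error
		// event carrying name=RSRC_DOCKER_STORAGE and exitcode=26.
		fmt.Printf("%-35s %s\n", e.Type, e.Data["message"])
	}
}

Piping a start run through it (minikube start ... --output=json | go run decode.go) would print the step and error messages, including the out-of-disk error event seen above.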

TestRunningBinaryUpgrade (66.99s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.9.0.12186706.exe start -p running-upgrade-20220602175639-283122 --memory=2200 --vm-driver=docker  --container-runtime=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.9.0.12186706.exe start -p running-upgrade-20220602175639-283122 --memory=2200 --vm-driver=docker  --container-runtime=docker: (39.622590487s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220602175639-283122 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220602175639-283122 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.450769817s)
helpers_test.go:175: Cleaning up "running-upgrade-20220602175639-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220602175639-283122

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220602175639-283122: (2.595878864s)
--- PASS: TestRunningBinaryUpgrade (66.99s)

TestKubernetesUpgrade (103.57s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220602175602-283122 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220602175602-283122 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (48.373472826s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220602175602-283122
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220602175602-283122: (11.361715442s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220602175602-283122 status --format={{.Host}}

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220602175602-283122 status --format={{.Host}}: exit status 7 (133.904173ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220602175602-283122 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220602175602-283122 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.174004931s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220602175602-283122 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220602175602-283122 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220602175602-283122 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (108.874784ms)

-- stdout --
	* [kubernetes-upgrade-20220602175602-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220602175602-283122
	    minikube start -p kubernetes-upgrade-20220602175602-283122 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220602175602-2831222 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6, by running:
	    
	    minikube start -p kubernetes-upgrade-20220602175602-283122 --kubernetes-version=v1.23.6
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220602175602-283122 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220602175602-283122 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (17.428060028s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220602175602-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220602175602-283122

=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220602175602-283122: (2.909175577s)
--- PASS: TestKubernetesUpgrade (103.57s)
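
The exit-106 refusal above is a version guard: minikube upgrades a cluster in place but will not downgrade it. A minimal sketch of that kind of check, written here with golang.org/x/mod/semver (an illustrative choice, not necessarily the comparison minikube's source uses):

package main

import (
	"fmt"
	"os"

	"golang.org/x/mod/semver"
)

// Exit code reported for K8S_DOWNGRADE_UNSUPPORTED in the log above.
const exitDowngradeUnsupported = 106

func main() {
	current, requested := "v1.23.6", "v1.16.0" // the versions from this run
	if !semver.IsValid(current) || !semver.IsValid(requested) {
		fmt.Fprintln(os.Stderr, "invalid semantic version")
		os.Exit(1)
	}
	// Moving to an older release is refused; same or newer is allowed.
	if semver.Compare(requested, current) < 0 {
		fmt.Fprintf(os.Stderr,
			"Unable to safely downgrade existing Kubernetes %s cluster to %s\n",
			current, requested)
		os.Exit(exitDowngradeUnsupported)
	}
	fmt.Println("version change accepted")
}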

TestMissingContainerUpgrade (103.37s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.2397154068.exe start -p missing-upgrade-20220602175456-283122 --memory=2200 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.2397154068.exe start -p missing-upgrade-20220602175456-283122 --memory=2200 --driver=docker  --container-runtime=docker: (48.91709548s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220602175456-283122

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220602175456-283122: (10.414873691s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220602175456-283122
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220602175456-283122 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220602175456-283122 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (40.566185969s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220602175456-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220602175456-283122
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220602175456-283122: (2.927907195s)
--- PASS: TestMissingContainerUpgrade (103.37s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220602175454-283122 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220602175454-283122 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (115.638846ms)

-- stdout --
	* [NoKubernetes-20220602175454-283122] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14269
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
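
The exit-14 MK_USAGE failure above is plain flag validation: --no-kubernetes and --kubernetes-version contradict each other. A hedged sketch of the same mutual-exclusion check using Go's standard flag package (illustrative; not minikube's actual option parser):

package main

import (
	"flag"
	"fmt"
	"os"
)

const exitUsage = 14 // exit code shown for MK_USAGE above

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version")
	flag.Parse()

	// The combination is contradictory, so reject it before doing any work.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr,
			"Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(exitUsage)
	}
	fmt.Println("flags accepted")
}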

TestNoKubernetes/serial/StartWithK8s (47.43s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220602175454-283122 --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220602175454-283122 --driver=docker  --container-runtime=docker: (47.005002162s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220602175454-283122 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.43s)

TestNoKubernetes/serial/StartWithStopK8s (15.77s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220602175454-283122 --no-kubernetes --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220602175454-283122 --no-kubernetes --driver=docker  --container-runtime=docker: (12.938898468s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220602175454-283122 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220602175454-283122 status -o json: exit status 2 (424.751789ms)

-- stdout --
	{"Name":"NoKubernetes-20220602175454-283122","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220602175454-283122

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220602175454-283122: (2.404026867s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.77s)

TestNoKubernetes/serial/Start (6.18s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220602175454-283122 --no-kubernetes --driver=docker  --container-runtime=docker

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220602175454-283122 --no-kubernetes --driver=docker  --container-runtime=docker: (6.178055646s)
--- PASS: TestNoKubernetes/serial/Start (6.18s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220602175454-283122 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220602175454-283122 "sudo systemctl is-active --quiet service kubelet": exit status 1 (387.406049ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

TestNoKubernetes/serial/ProfileList (1.3s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.30s)

TestNoKubernetes/serial/Stop (4.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220602175454-283122
E0602 17:56:07.942629  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220602175454-283122: (4.244118887s)
--- PASS: TestNoKubernetes/serial/Stop (4.24s)

TestNoKubernetes/serial/StartNoArgs (6.05s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220602175454-283122 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220602175454-283122 --driver=docker  --container-runtime=docker: (6.053117937s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.05s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.41s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220602175454-283122 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220602175454-283122 "sudo systemctl is-active --quiet service kubelet": exit status 1 (411.659888ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.41s)

TestStoppedBinaryUpgrade/Setup (0.51s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.51s)

TestStoppedBinaryUpgrade/Upgrade (70.19s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.9.0.353037975.exe start -p stopped-upgrade-20220602175618-283122 --memory=2200 --vm-driver=docker  --container-runtime=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.9.0.353037975.exe start -p stopped-upgrade-20220602175618-283122 --memory=2200 --vm-driver=docker  --container-runtime=docker: (43.481314928s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.9.0.353037975.exe -p stopped-upgrade-20220602175618-283122 stop

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.9.0.353037975.exe -p stopped-upgrade-20220602175618-283122 stop: (2.419901746s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220602175618-283122 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220602175618-283122 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.293182482s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (70.19s)
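
The passing upgrade above is a three-step sequence: start the cluster with an old release binary, stop it with that same binary, then start it again with the binary under test. A compressed sketch of that sequencing via os/exec (paths, flags, and the profile name are copied from the log; error handling is deliberately minimal):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes one CLI step and aborts the flow on the first failure.
func run(bin string, args ...string) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
	}
	fmt.Printf("ok: %s %v\n", bin, args)
}

func main() {
	const profile = "stopped-upgrade-20220602175618-283122"
	oldBin := "/tmp/minikube-v1.9.0.353037975.exe" // legacy release binary
	newBin := "out/minikube-linux-amd64"           // binary under test

	run(oldBin, "start", "-p", profile, "--memory=2200",
		"--vm-driver=docker", "--container-runtime=docker")
	run(oldBin, "-p", profile, "stop")
	// The new binary has to adopt the stopped cluster the old one created.
	run(newBin, "start", "-p", profile, "--memory=2200",
		"--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=docker")
}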

TestStoppedBinaryUpgrade/MinikubeLogs (1.72s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220602175618-283122
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20220602175618-283122: (1.722933796s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.72s)

TestPause/serial/Start (55.21s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220602175733-283122 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220602175733-283122 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (55.205049538s)
--- PASS: TestPause/serial/Start (55.21s)

TestPause/serial/SecondStartNoReconfiguration (5.12s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220602175733-283122 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220602175733-283122 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (5.109632452s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.12s)

TestPause/serial/Pause (0.65s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220602175733-283122 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

TestPause/serial/VerifyStatus (0.44s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20220602175733-283122 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20220602175733-283122 --output=json --layout=cluster: exit status 2 (440.8128ms)

-- stdout --
	{"Name":"pause-20220602175733-283122","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0-beta.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220602175733-283122","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)
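
The --layout=cluster payload above nests HTTP-style status codes per component (418 Paused, 405 Stopped, 200 OK, 500 Error). A small decoder sketch; the structs model only the fields visible in this report and ignore the rest:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
	Nodes      []node               `json:"Nodes"`
}

func main() {
	var st clusterStatus
	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for _, c := range n.Components {
			fmt.Printf("  %s/%s: %s (%d)\n", n.Name, c.Name, c.StatusName, c.StatusCode)
		}
	}
}

Fed the stdout block above, it would report the cluster as Paused with the apiserver Paused (418) and the kubelet Stopped (405).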

TestPause/serial/Unpause (0.67s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20220602175733-283122 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

TestPause/serial/PauseAgain (0.89s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20220602175733-283122 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

TestPause/serial/DeletePaused (2.67s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20220602175733-283122 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20220602175733-283122 --alsologtostderr -v=5: (2.672501746s)
--- PASS: TestPause/serial/DeletePaused (2.67s)

TestPause/serial/VerifyDeletedResources (2.48s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.372047083s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-20220602175733-283122
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-20220602175733-283122: exit status 1 (33.507319ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220602175733-283122

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.48s)
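
The cleanup check above leans on docker volume inspect exiting non-zero once the volume has been deleted. A sketch of the same probe from Go (the volume name is taken from the log; treating any clean non-zero docker exit as "absent" is a simplification of the real error handling):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// volumeExists shells out to `docker volume inspect`: a zero exit means the
// volume is still present; a non-zero exit, as in the log above, means gone.
func volumeExists(name string) (bool, error) {
	err := exec.Command("docker", "volume", "inspect", name).Run()
	if err == nil {
		return true, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return false, nil // docker ran and reported "No such volume"
	}
	return false, err // docker itself could not be started
}

func main() {
	exists, err := volumeExists("pause-20220602175733-283122")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("volume still present:", exists)
}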

TestStartStop/group/old-k8s-version/serial/FirstStart (314.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220602175851-283122 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0602 17:58:55.530668  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220602175851-283122 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (5m14.309318053s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (314.31s)

TestStartStop/group/no-preload/serial/FirstStart (49.83s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220602175916-283122 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6
E0602 17:59:29.023293  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 17:59:29.029256  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 17:59:29.039572  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 17:59:29.060077  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 17:59:29.100392  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 17:59:29.180743  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 17:59:29.341156  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 17:59:29.661674  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 17:59:30.302240  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 17:59:31.583356  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 17:59:33.160241  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 17:59:34.143534  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 17:59:39.264738  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 17:59:49.505662  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220602175916-283122 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6: (49.828149036s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (49.83s)

TestStartStop/group/no-preload/serial/DeployApp (8.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220602175916-283122 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [c90aee8b-ddc0-4ff2-bb93-55a9a7770b93] Pending
helpers_test.go:342: "busybox" [c90aee8b-ddc0-4ff2-bb93-55a9a7770b93] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [c90aee8b-ddc0-4ff2-bb93-55a9a7770b93] Running
E0602 18:00:09.985995  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.012574343s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220602175916-283122 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.31s)
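
The helper that reports "integration-test=busybox healthy within 8.01s" is, at heart, a poll over pods matching a label selector. A rough client-go sketch of that loop, under stated assumptions: KUBECONFIG is set as in the environment dump earlier in this report, the deprecated-but-available wait.PollImmediate helper is used, and phase Running stands in for the test's actual readiness criteria:

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll until at least one matching pod reports phase Running,
	// with the same 8m ceiling the test above waits for.
	err = wait.PollImmediate(2*time.Second, 8*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "integration-test=busybox"})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("integration-test=busybox is Running")
}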

TestStartStop/group/embed-certs/serial/FirstStart (289.19s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220602180014-283122 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220602180014-283122 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6: (4m49.190323669s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (289.19s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.69s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220602175916-283122 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220602175916-283122 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.69s)

TestStartStop/group/no-preload/serial/Stop (11.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220602175916-283122 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220602175916-283122 --alsologtostderr -v=3: (11.013830221s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220602175916-283122 -n no-preload-20220602175916-283122
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220602175916-283122 -n no-preload-20220602175916-283122: exit status 7 (112.645195ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220602175916-283122 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (339.41s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220602175916-283122 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6
E0602 18:00:50.946200  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 18:01:07.942703  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220602175916-283122 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6: (5m38.78239053s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220602175916-283122 -n no-preload-20220602175916-283122
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (339.41s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (289.84s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220602180121-283122 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6
E0602 18:02:12.867056  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 18:03:55.531200  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220602180121-283122 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6: (4m49.836397791s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (289.84s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220602175851-283122 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [3af383fe-3049-47ed-a9cb-6bd82d22ebf9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [3af383fe-3049-47ed-a9cb-6bd82d22ebf9] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.012049244s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220602175851-283122 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.34s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220602175851-283122 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220602175851-283122 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.64s)

TestStartStop/group/old-k8s-version/serial/Stop (10.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220602175851-283122 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220602175851-283122 --alsologtostderr -v=3: (10.926465212s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.93s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220602175851-283122 -n old-k8s-version-20220602175851-283122
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220602175851-283122 -n old-k8s-version-20220602175851-283122: exit status 7 (107.700321ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220602175851-283122 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (599.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220602175851-283122 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0602 18:04:29.022717  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
E0602 18:04:33.160426  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
E0602 18:04:56.708124  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220602175851-283122 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (9m58.939675293s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220602175851-283122 -n old-k8s-version-20220602175851-283122
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (599.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220602180014-283122 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [555effd3-dee8-4369-a122-65f0d4b91c9d] Pending
helpers_test.go:342: "busybox" [555effd3-dee8-4369-a122-65f0d4b91c9d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [555effd3-dee8-4369-a122-65f0d4b91c9d] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.011764508s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220602180014-283122 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220602180014-283122 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220602180014-283122 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.63s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220602180014-283122 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220602180014-283122 --alsologtostderr -v=3: (10.875308735s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220602180014-283122 -n embed-certs-20220602180014-283122
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220602180014-283122 -n embed-certs-20220602180014-283122: exit status 7 (104.624404ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220602180014-283122 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (577.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220602180014-283122 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220602180014-283122 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6: (9m37.207685597s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220602180014-283122 -n embed-certs-20220602180014-283122
E0602 18:15:01.191568  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (577.68s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-qdplt" [72bdc922-ae76-4c1f-a945-fe133981acf8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0602 18:06:07.942864  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-qdplt" [72bdc922-ae76-4c1f-a945-fe133981acf8] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.013095783s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220602180121-283122 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [a49a6bb7-bb49-466f-b590-a49f8fc18763] Pending
helpers_test.go:342: "busybox" [a49a6bb7-bb49-466f-b590-a49f8fc18763] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [a49a6bb7-bb49-466f-b590-a49f8fc18763] Running

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.012289485s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220602180121-283122 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (8.32s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220602180121-283122 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220602180121-283122 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.65s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-qdplt" [72bdc922-ae76-4c1f-a945-fe133981acf8] Running

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009093496s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220602175916-283122 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Stop (11.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220602180121-283122 --alsologtostderr -v=3

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220602180121-283122 --alsologtostderr -v=3: (11.053120804s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (11.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220602175916-283122 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220602175916-283122 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220602175916-283122 -n no-preload-20220602175916-283122
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220602175916-283122 -n no-preload-20220602175916-283122: exit status 2 (425.166696ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220602175916-283122 -n no-preload-20220602175916-283122
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220602175916-283122 -n no-preload-20220602175916-283122: exit status 2 (415.675462ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20220602175916-283122 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220602175916-283122 -n no-preload-20220602175916-283122
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220602175916-283122 -n no-preload-20220602175916-283122
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.25s)
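The Pause test above encodes a fixed verification loop: pause, confirm the apiserver reports "Paused" and the kubelet reports "Stopped" (both via exit status 2, which the test accepts as expected), then unpause and confirm both status queries exit cleanly. A hand-run equivalent, again with a hypothetical profile name:

	minikube pause -p demo
	minikube status --format={{.APIServer}} -p demo   # "Paused", exit status 2
	minikube status --format={{.Kubelet}} -p demo     # "Stopped", exit status 2
	minikube unpause -p demo
	minikube status --format={{.APIServer}} -p demo   # exits 0 once unpaused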

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602180121-283122 -n default-k8s-different-port-20220602180121-283122
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220602180121-283122 -n default-k8s-different-port-20220602180121-283122: exit status 7 (110.54664ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220602180121-283122 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (41.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220602180632-283122 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220602180632-283122 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6: (41.968194362s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.97s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (289.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220602175746-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220602175746-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker: (4m49.177699826s)
--- PASS: TestNetworkPlugins/group/auto/Start (289.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220602180632-283122 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220602180632-283122 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220602180632-283122 --alsologtostderr -v=3: (11.031466972s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220602180632-283122 -n newest-cni-20220602180632-283122
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220602180632-283122 -n newest-cni-20220602180632-283122: exit status 7 (111.82151ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220602180632-283122 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (20.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220602180632-283122 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6
E0602 18:07:36.206242  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220602180632-283122 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.23.6: (20.261805574s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220602180632-283122 -n newest-cni-20220602180632-283122
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.72s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220602180632-283122 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220602180632-283122 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220602180632-283122 -n newest-cni-20220602180632-283122
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220602180632-283122 -n newest-cni-20220602180632-283122: exit status 2 (419.855795ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220602180632-283122 -n newest-cni-20220602180632-283122
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220602180632-283122 -n newest-cni-20220602180632-283122: exit status 2 (421.001768ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220602180632-283122 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220602180632-283122 -n newest-cni-20220602180632-283122
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220602180632-283122 -n newest-cni-20220602180632-283122
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.27s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (86.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220602175747-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker
E0602 18:08:38.579396  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 18:08:55.531232  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/addons-20220602171222-283122/client.crt: no such file or directory
E0602 18:09:10.991978  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/ingress-addon-legacy-20220602173000-283122/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220602175747-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker: (1m26.218784914s)
--- PASS: TestNetworkPlugins/group/cilium/Start (86.22s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-gr6hn" [d206c6eb-bae7-4383-bbf9-a9f14c4b265c] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.015611435s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)
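The "waiting 10m0s for pods matching ..." step above is the test's polling helper watching a label selector; roughly the same wait can be expressed with kubectl alone. A sketch using the context name from this run:

	kubectl --context cilium-20220602175747-283122 -n kube-system \
	  wait pod -l k8s-app=cilium --for=condition=Ready --timeout=10m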

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220602175747-283122 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (12.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220602175747-283122 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-ht8zs" [797125e2-20f8-4e1a-a69d-8a66640f089f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0602 18:09:29.023265  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/skaffold-20220602175344-283122/client.crt: no such file or directory
helpers_test.go:342: "netcat-668db85669-ht8zs" [797125e2-20f8-4e1a-a69d-8a66640f089f] Running
E0602 18:09:33.161065  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 11.023532856s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (12.02s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220602175747-283122 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220602175747-283122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220602175747-283122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.15s)
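The Localhost and HairPin checks above differ only in the nc target: "localhost 8080" verifies the netcat pod can reach its own port directly, while "netcat 8080" resolves the netcat name (a service fronting the same deployment, assuming the test manifest creates one) and loops back to the same pod through the service IP, the classic hairpin-NAT case. Either probe can be replayed by hand against this context:

	kubectl --context cilium-20220602175747-283122 exec deployment/netcat -- \
	  /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" && echo hairpin OK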

                                                
                                    
TestNetworkPlugins/group/calico/Start (60.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220602175747-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker
E0602 18:10:06.658341  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
E0602 18:10:06.663645  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
E0602 18:10:06.673988  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
E0602 18:10:06.694346  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
E0602 18:10:06.734686  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
E0602 18:10:06.815056  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
E0602 18:10:06.975464  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
E0602 18:10:07.296098  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
E0602 18:10:07.937072  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
E0602 18:10:09.217720  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
E0602 18:10:11.778297  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
E0602 18:10:16.899110  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
E0602 18:10:27.139527  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p calico-20220602175747-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker: (1m0.790283504s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.79s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-mcsb4" [85dcab72-b876-4f35-8d6e-2e79a9a79849] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.015501997s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20220602175747-283122 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-20220602175747-283122 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-rrd7h" [d13d038e-4e67-472e-a8dd-be684b30738e] Pending
E0602 18:10:47.620215  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
helpers_test.go:342: "netcat-668db85669-rrd7h" [d13d038e-4e67-472e-a8dd-be684b30738e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-rrd7h" [d13d038e-4e67-472e-a8dd-be684b30738e] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.006625474s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220602175746-283122 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220602175746-283122 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-6sf6w" [9d69017a-ef80-4f33-add7-cfeaab83d5a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-6sf6w" [9d69017a-ef80-4f33-add7-cfeaab83d5a5] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.006987602s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-958c5c65f-htqtd" [991274ad-f815-4617-bc1a-081eb0dda8a7] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0602 18:14:25.349208  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013717219s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-958c5c65f-htqtd" [991274ad-f815-4617-bc1a-081eb0dda8a7] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0602 18:14:30.470254  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/cilium-20220602175747-283122/client.crt: no such file or directory
E0602 18:14:33.161242  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/functional-20220602171905-283122/client.crt: no such file or directory
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006298451s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220602175851-283122 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220602175851-283122 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.41s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220602175851-283122 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220602175851-283122 -n old-k8s-version-20220602175851-283122
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220602175851-283122 -n old-k8s-version-20220602175851-283122: exit status 2 (432.67362ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220602175851-283122 -n old-k8s-version-20220602175851-283122
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220602175851-283122 -n old-k8s-version-20220602175851-283122: exit status 2 (430.059274ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20220602175851-283122 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220602175851-283122 -n old-k8s-version-20220602175851-283122
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220602175851-283122 -n old-k8s-version-20220602175851-283122
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.25s)

                                                
                                    
TestNetworkPlugins/group/false/Start (44.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220602175747-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p false-20220602175747-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker: (44.088841524s)
--- PASS: TestNetworkPlugins/group/false/Start (44.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-7nrmg" [85270f8a-98b9-4a5d-89aa-d8b50d32b427] Running
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-7nrmg" [85270f8a-98b9-4a5d-89aa-d8b50d32b427] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016527745s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-cd7c84bfc-7nrmg" [85270f8a-98b9-4a5d-89aa-d8b50d32b427] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0602 18:15:06.658174  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00813682s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220602180014-283122 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220602180014-283122 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220602180014-283122 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220602180014-283122 -n embed-certs-20220602180014-283122

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220602180014-283122 -n embed-certs-20220602180014-283122: exit status 2 (448.885225ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220602180014-283122 -n embed-certs-20220602180014-283122
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220602180014-283122 -n embed-certs-20220602180014-283122: exit status 2 (429.260427ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20220602180014-283122 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220602180014-283122 -n embed-certs-20220602180014-283122
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220602180014-283122 -n embed-certs-20220602180014-283122
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.39s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-20220602175747-283122 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220602175747-283122 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-x7mdx" [12aeadf4-9634-471b-98b4-52166f4b2692] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:342: "netcat-668db85669-x7mdx" [12aeadf4-9634-471b-98b4-52166f4b2692] Running
E0602 18:15:34.342257  283122 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-14269-279761-ab7bb61b313d0ba57acd833ecb833795c1bc5389/.minikube/profiles/no-preload-20220602175916-283122/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.00730821s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.25s)
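Note: every NetCatPod test in this run follows the same pattern: force-replace the netcat deployment from the repo's test data, then poll until a pod labelled app=netcat is Running. A rough hand-run equivalent (assuming a minikube source checkout so that testdata/netcat-deployment.yaml resolves; "kubectl wait" here is a stand-in for the test's own poll loop):

	kubectl --context false-20220602175747-283122 replace --force -f testdata/netcat-deployment.yaml
	# wait until the netcat pod (container "dnsutils") reports Ready
	kubectl --context false-20220602175747-283122 wait --for=condition=ready pod -l app=netcat --timeout=15m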

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (50.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220602175746-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220602175746-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker: (50.267847873s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.27s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220602175746-283122 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220602175746-283122 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-mghwt" [290b7970-e6ce-4e6c-a801-a2bea04c0119] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-mghwt" [290b7970-e6ce-4e6c-a801-a2bea04c0119] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.006055735s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.23s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220602175746-283122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220602175746-283122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
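Note: DNS, Localhost, and HairPin together exercise in-cluster connectivity from the netcat pod: service-name resolution, a connection to the pod's own port over localhost, and a hairpin connection back to the pod through its "netcat" service. The three probes, copied from the log above, can be run directly against the live profile:

	# in-cluster DNS resolution
	kubectl --context enable-default-cni-20220602175746-283122 exec deployment/netcat -- nslookup kubernetes.default
	# localhost reachability on the pod's own port 8080
	kubectl --context enable-default-cni-20220602175746-283122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: reach the pod back through its own service name
	kubectl --context enable-default-cni-20220602175746-283122 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"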

                                                
                                    
TestNetworkPlugins/group/bridge/Start (43.16s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220602175746-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220602175746-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker: (43.158789184s)
--- PASS: TestNetworkPlugins/group/bridge/Start (43.16s)

TestNetworkPlugins/group/kubenet/Start (40s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20220602175746-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20220602175746-283122 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (39.99787227s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (40.00s)
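Note: the bridge and kubenet groups start otherwise-identical clusters through two different networking paths: --cni=bridge installs the CNI bridge plugin, while --network-plugin=kubenet uses the kubelet's built-in kubenet networking instead of CNI. A sketch of the two start invocations, copied from the log above with --alsologtostderr dropped for brevity; either profile can later be removed with "delete -p <profile>":

	# CNI bridge plugin
	out/minikube-linux-amd64 start -p bridge-20220602175746-283122 --memory=2048 --wait=true --wait-timeout=5m --cni=bridge --driver=docker --container-runtime=docker
	# kubelet-native kubenet networking (no CNI)
	out/minikube-linux-amd64 start -p kubenet-20220602175746-283122 --memory=2048 --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker --container-runtime=docker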

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220602175746-283122 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220602175746-283122 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-5kx7f" [8489e8b0-0217-47cf-ab81-3d1c9b4aef24] Pending
helpers_test.go:342: "netcat-668db85669-5kx7f" [8489e8b0-0217-47cf-ab81-3d1c9b4aef24] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-5kx7f" [8489e8b0-0217-47cf-ab81-3d1c9b4aef24] Running

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.018781012s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-20220602175746-283122 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220602175746-283122 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-trz2w" [1b65bd67-bbcf-4354-bd17-04b6ae56880e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:342: "netcat-668db85669-trz2w" [1b65bd67-bbcf-4354-bd17-04b6ae56880e] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.007274966s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.19s)

Test skip (19/278)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.23.6/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6/cached-images (0.00s)

TestDownloadOnly/v1.23.6/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6/binaries (0.00s)

TestDownloadOnly/v1.23.6/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.6/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.6/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.47s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220602180120-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220602180120-283122
--- SKIP: TestStartStop/group/disable-driver-mounts (0.47s)

TestNetworkPlugins/group/flannel (0.44s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220602175746-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220602175746-283122
--- SKIP: TestNetworkPlugins/group/flannel (0.44s)

TestNetworkPlugins/group/custom-flannel (0.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220602175747-283122" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-flannel-20220602175747-283122
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.43s)