Test Report: Docker_macOS 13173

7fa4ce093861046ed4d109975b74ec5f157758ca:2021-12-14:21803

Failed tests (8/226)

TestDownloadOnly/v1.16.0/preload-exists (0.18s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:105: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.18s)
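The failing subtest above reduces to a single file-existence check on the cached preload tarball. A minimal sketch of that check (the cache root below is illustrative; the real test resolves it from MINIKUBE_HOME, not the home directory):

```python
from pathlib import Path

# Illustrative cache root; the actual test derives this from MINIKUBE_HOME.
cache_root = Path.home() / ".minikube" / "cache" / "preloaded-tarball"
tarball = cache_root / "preloaded-images-k8s-v16-v1.16.0-docker-overlay2-amd64.tar.lz4"

if tarball.is_file():
    print(f"preload exists: {tarball}")
else:
    # This is the condition the stat error above reported.
    print(f"preload missing: {tarball}")
```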

TestFunctional/serial/ExtraConfig (32.03s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:735: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211214191315-1964 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1214 19:15:59.668382    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
functional_test.go:735: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211214191315-1964 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (27.624716316s)

-- stdout --
	* [functional-20211214191315-1964] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13173
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20211214191315-1964 in cluster functional-20211214191315-1964
	* Pulling base image ...
	* Updating the running docker "functional-20211214191315-1964" container ...
	* Preparing Kubernetes v1.22.4 on Docker 20.10.11 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	E1214 19:16:00.027425    3861 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211214191315-1964" hosting pod "etcd-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	E1214 19:16:00.031893    3861 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211214191315-1964" hosting pod "kube-apiserver-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	E1214 19:16:00.036899    3861 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211214191315-1964" hosting pod "kube-controller-manager-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	E1214 19:16:00.348041    3861 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211214191315-1964" hosting pod "kube-proxy-gxjj9" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	E1214 19:16:00.741459    3861 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211214191315-1964" hosting pod "kube-scheduler-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-20211214191315-1964": Get "https://127.0.0.1:52226/api/v1/nodes/functional-20211214191315-1964": EOF
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:737: failed to restart minikube. args "out/minikube-darwin-amd64 start -p functional-20211214191315-1964 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:739: restart took 27.62504794s for "functional-20211214191315-1964" cluster.
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect functional-20211214191315-1964
helpers_test.go:236: (dbg) docker inspect functional-20211214191315-1964:

-- stdout --
	[
	    {
	        "Id": "063d382c672d09e0251ddbbea7e2717b5981f7dcc7111ce59695ed33d31998c1",
	        "Created": "2021-12-15T03:13:27.603409374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36751,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-15T03:13:36.628247362Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:25dd136318bb473aa79e4b6546ab850b776602a93c650a0488bd51f9337f57cc",
	        "ResolvConfPath": "/var/lib/docker/containers/063d382c672d09e0251ddbbea7e2717b5981f7dcc7111ce59695ed33d31998c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/063d382c672d09e0251ddbbea7e2717b5981f7dcc7111ce59695ed33d31998c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/063d382c672d09e0251ddbbea7e2717b5981f7dcc7111ce59695ed33d31998c1/hosts",
	        "LogPath": "/var/lib/docker/containers/063d382c672d09e0251ddbbea7e2717b5981f7dcc7111ce59695ed33d31998c1/063d382c672d09e0251ddbbea7e2717b5981f7dcc7111ce59695ed33d31998c1-json.log",
	        "Name": "/functional-20211214191315-1964",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20211214191315-1964:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20211214191315-1964",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/de7b6edd911d2cbffaabb6b928787a944c049e957c86fd228fec6c5d6fbe8a2d-init/diff:/var/lib/docker/overlay2/d65d6ee199d533962e560eb1009b9421c35530981cd2b3a5c895e7aa29133ef8/diff:/var/lib/docker/overlay2/cf5813d5f780adc966435adb87967f9ac9286e5cca19b6bffc7faf37a1488181/diff:/var/lib/docker/overlay2/bbf7d73443b3c1f6753765c62119ac0cb4533734f58ea785a07e83bd45fd609b/diff:/var/lib/docker/overlay2/7d3cc6311f7e53b5fe29c271d1dda738d6a6fd4d81dd72b820fa40b56493c076/diff:/var/lib/docker/overlay2/e59bd0b69cd268058ccffb5fb6a21ada51440450dc39f46af8d6ae6a0b58bb88/diff:/var/lib/docker/overlay2/2801fff88f8794874e8eedcda08e1e51151070b8b3446abc9bb5e92a0995b5f9/diff:/var/lib/docker/overlay2/d62b1160868c64ec1baf9e63759f5726132676409fde8648705e09807185e554/diff:/var/lib/docker/overlay2/4cca4926b4249a86bdf8415cf6388c243dde6ce324e7d3ca1e99bfd0b003ad11/diff:/var/lib/docker/overlay2/15154e198d38ee378134c98df7652e12a01ac6a18a3ebac07bac22dde0f70cb2/diff:/var/lib/docker/overlay2/34f5aa
f154bad24983cab59326bd1f9afebc4a0faede5465d9d049edd03752ef/diff:/var/lib/docker/overlay2/d93805ff58045a7635ffb94b9d6b0e525969424603b68bdfe232872070622e29/diff:/var/lib/docker/overlay2/1a84b556261df2d3a163c7dbb1020f88045caf924509e24e2d820ba5d8780b37/diff:/var/lib/docker/overlay2/140a3ef1453300c149d79ee7cb6cbd101478204fafa6f798ead4ff62cdedcea9/diff:/var/lib/docker/overlay2/18ed14d5017caa3b45068784dd46b626a571fbb85523a3dc0961c19e6fa9b052/diff:/var/lib/docker/overlay2/d41f951f52167ee3e8a1479a3e22b5fe6c67cd78372274d636ea1385fbcaff45/diff:/var/lib/docker/overlay2/8d06d7c23fcac42d6d4375c4df7b5cb42aa7dda7c109cd79784ed629e2b99162/diff:/var/lib/docker/overlay2/f7f73d07aef9e7d1cd8d512c0c87187a7c1716177c6b4abb38f7b672fa78bfd7/diff:/var/lib/docker/overlay2/48fbe6c22b155794cba062ea1054747e89ddafdd693d4b0613e11fa7c1078bcb/diff:/var/lib/docker/overlay2/bc52c503a14b163785b1d3d507f12ed2bd756bb56bf0e69cc95abeb09564b53f/diff:/var/lib/docker/overlay2/c7505994e09c79aba0a61d1035017d2c4146a8145da48c793a8b34038e28eba3/diff:/var/lib/d
ocker/overlay2/7c877c41fb0dc251309eeed242ef1d07aab3ce445f8a00bd511510bafadf6190/diff:/var/lib/docker/overlay2/476a1804d5d4872206e789f62720a282f86e8a147ff27052295111051b445bc6/diff:/var/lib/docker/overlay2/9b3dd64375f79358e4a247638ba3b21d75fa9574ea58ae70540c7f8a40c6c7e0/diff:/var/lib/docker/overlay2/5764bedae373ccbfd87a2d93dc8f05063553bbc56f8702001e93a71126611160/diff:/var/lib/docker/overlay2/b1d980593cf4766dd03426b96cea1461ecb9df0ab9e9b0dcb0707f52aca17667/diff:/var/lib/docker/overlay2/f80ab1026afe4bfff3af056bb301eefbdd80b45d07b1ec69d47518d8929ad293/diff:/var/lib/docker/overlay2/6aca893938f71b11291705caf4db29dac4047a08ea7dbe7a890f541acf583c7e/diff:/var/lib/docker/overlay2/2be1be3d25f65a42cfa382d19c19591c016767d1b0e981438e1bc1c1eccbbb26/diff:/var/lib/docker/overlay2/37dc1176e9195ffb3c40ecbbae26ec9cd8f8e10ddb002abe3d8b4ec32429c0cc/diff:/var/lib/docker/overlay2/fb83115e15a1ac8e98ae386d5b286c6bb9c33b19c88c3ee2cb6e56cbfced9101/diff:/var/lib/docker/overlay2/a0a75022c882a65c8f12d99327ab1b4774935514c4100c986b40f26dfa6
57cbc/diff:/var/lib/docker/overlay2/c3d5a8cd2f734ba495b40e924f240b86adbf7127ffcc37a7b46d074f326a92b2/diff:/var/lib/docker/overlay2/e8a4a96d1a43dc42113e8db34f203842db67806cbd8a4c9d4eb308f177e0a639/diff:/var/lib/docker/overlay2/79a5cf9e15648205b381d71c57af6afa5ed3657072b7fd68a7dfaf384165fb88/diff:/var/lib/docker/overlay2/8c3eadf3bf1e807a2fe0d03a3f3074ae928cdc951a7f8e5f12a64cf65a7af79d/diff:/var/lib/docker/overlay2/5bcc190ed22527c90283b3c8a56ecf82ab1af2d24ce87fe599b4b2ac4ec7d9d5/diff:/var/lib/docker/overlay2/a5009febf0a368117ee8e9860233cfe0424c46701611b76625bb6b423cba86ac/diff:/var/lib/docker/overlay2/1ac60e0a0574910c05a20192d5988606665d3101ced6bdbc31b7660cd8431283/diff:/var/lib/docker/overlay2/c6cdf5fd609878026154660951e80c9c6bc61a49cd2d889fbdccea6c8c36d474/diff:/var/lib/docker/overlay2/46c08365e5d94e0fcaca61e53b0d880b1b42b9c1387136f352318dca068deef3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de7b6edd911d2cbffaabb6b928787a944c049e957c86fd228fec6c5d6fbe8a2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de7b6edd911d2cbffaabb6b928787a944c049e957c86fd228fec6c5d6fbe8a2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de7b6edd911d2cbffaabb6b928787a944c049e957c86fd228fec6c5d6fbe8a2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-20211214191315-1964",
	                "Source": "/var/lib/docker/volumes/functional-20211214191315-1964/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20211214191315-1964",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20211214191315-1964",
	                "name.minikube.sigs.k8s.io": "functional-20211214191315-1964",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3fb97efd3b1452cfe5e5eb7721362081e1f4c664ecf3cbfd167fbe642693a5ef",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52227"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52228"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52229"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52230"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52226"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3fb97efd3b14",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20211214191315-1964": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "063d382c672d",
	                        "functional-20211214191315-1964"
	                    ],
	                    "NetworkID": "65ae89935c9b6854d4e470dffb3ea0766ae64e3661db1cb2b0a74f255d5cd0c7",
	                    "EndpointID": "0739de465c186d772feba1f3bc4ad525c49028697d0b88cd5f7db71c0a76262a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
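The `GUEST_START` EOF earlier targets `https://127.0.0.1:52226`, which the inspect output above maps to the container's apiserver port `8441/tcp`. A sketch of extracting that mapping from `docker inspect` JSON, with a trimmed fragment of the `NetworkSettings.Ports` section inlined for illustration:

```python
import json

# Trimmed fragment of the NetworkSettings.Ports section from the inspect output.
inspect_json = """
[{"NetworkSettings": {"Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "52227"}],
    "8441/tcp": [{"HostIp": "127.0.0.1", "HostPort": "52226"}]
}}}]
"""

ports = json.loads(inspect_json)[0]["NetworkSettings"]["Ports"]
apiserver = ports["8441/tcp"][0]
# Matches the URL in the GUEST_START error: https://127.0.0.1:52226
print(f"apiserver published at https://{apiserver['HostIp']}:{apiserver['HostPort']}")
```

The error means the published port was reachable but the connection was dropped (EOF), consistent with the apiserver inside the container not yet serving.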
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211214191315-1964 -n functional-20211214191315-1964
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211214191315-1964 -n functional-20211214191315-1964: exit status 2 (622.151638ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 logs -n 25: (3.005459348s)
helpers_test.go:253: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------|--------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                                    Args                                     |            Profile             |  User   | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------------------------------------------|--------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | nospam-20211214191129-1964 --log_dir                                        | nospam-20211214191129-1964     | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:12:48 PST | Tue, 14 Dec 2021 19:12:48 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 |                                |         |         |                               |                               |
	|         | pause                                                                       |                                |         |         |                               |                               |
	| -p      | nospam-20211214191129-1964 --log_dir                                        | nospam-20211214191129-1964     | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:12:48 PST | Tue, 14 Dec 2021 19:12:49 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 |                                |         |         |                               |                               |
	|         | unpause                                                                     |                                |         |         |                               |                               |
	| -p      | nospam-20211214191129-1964 --log_dir                                        | nospam-20211214191129-1964     | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:12:49 PST | Tue, 14 Dec 2021 19:12:50 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 |                                |         |         |                               |                               |
	|         | unpause                                                                     |                                |         |         |                               |                               |
	| -p      | nospam-20211214191129-1964 --log_dir                                        | nospam-20211214191129-1964     | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:12:50 PST | Tue, 14 Dec 2021 19:12:50 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 |                                |         |         |                               |                               |
	|         | unpause                                                                     |                                |         |         |                               |                               |
	| -p      | nospam-20211214191129-1964 --log_dir                                        | nospam-20211214191129-1964     | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:12:50 PST | Tue, 14 Dec 2021 19:13:08 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 |                                |         |         |                               |                               |
	|         | stop                                                                        |                                |         |         |                               |                               |
	| -p      | nospam-20211214191129-1964 --log_dir                                        | nospam-20211214191129-1964     | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:13:08 PST | Tue, 14 Dec 2021 19:13:08 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 |                                |         |         |                               |                               |
	|         | stop                                                                        |                                |         |         |                               |                               |
	| -p      | nospam-20211214191129-1964 --log_dir                                        | nospam-20211214191129-1964     | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:13:08 PST | Tue, 14 Dec 2021 19:13:08 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 |                                |         |         |                               |                               |
	|         | stop                                                                        |                                |         |         |                               |                               |
	| delete  | -p nospam-20211214191129-1964                                               | nospam-20211214191129-1964     | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:13:09 PST | Tue, 14 Dec 2021 19:13:15 PST |
	| start   | -p                                                                          | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:13:15 PST | Tue, 14 Dec 2021 19:15:19 PST |
	|         | functional-20211214191315-1964                                              |                                |         |         |                               |                               |
	|         | --memory=4000                                                               |                                |         |         |                               |                               |
	|         | --apiserver-port=8441                                                       |                                |         |         |                               |                               |
	|         | --wait=all --driver=docker                                                  |                                |         |         |                               |                               |
	| start   | -p                                                                          | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:19 PST | Tue, 14 Dec 2021 19:15:26 PST |
	|         | functional-20211214191315-1964                                              |                                |         |         |                               |                               |
	|         | --alsologtostderr -v=8                                                      |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:28 PST | Tue, 14 Dec 2021 19:15:30 PST |
	|         | cache add k8s.gcr.io/pause:3.1                                              |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:30 PST | Tue, 14 Dec 2021 19:15:34 PST |
	|         | cache add k8s.gcr.io/pause:3.3                                              |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:34 PST | Tue, 14 Dec 2021 19:15:37 PST |
	|         | cache add                                                                   |                                |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                     |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964 cache add                                    | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:38 PST | Tue, 14 Dec 2021 19:15:39 PST |
	|         | minikube-local-cache-test:functional-20211214191315-1964                    |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964 cache delete                                 | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:39 PST | Tue, 14 Dec 2021 19:15:39 PST |
	|         | minikube-local-cache-test:functional-20211214191315-1964                    |                                |         |         |                               |                               |
	| cache   | delete k8s.gcr.io/pause:3.3                                                 | minikube                       | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:40 PST | Tue, 14 Dec 2021 19:15:40 PST |
	| cache   | list                                                                        | minikube                       | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:40 PST | Tue, 14 Dec 2021 19:15:40 PST |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:40 PST | Tue, 14 Dec 2021 19:15:40 PST |
	|         | ssh sudo crictl images                                                      |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:40 PST | Tue, 14 Dec 2021 19:15:41 PST |
	|         | ssh sudo docker rmi                                                         |                                |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                     |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:42 PST | Tue, 14 Dec 2021 19:15:44 PST |
	|         | cache reload                                                                |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:44 PST | Tue, 14 Dec 2021 19:15:44 PST |
	|         | ssh sudo crictl inspecti                                                    |                                |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                     |                                |         |         |                               |                               |
	| cache   | delete k8s.gcr.io/pause:3.1                                                 | minikube                       | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:44 PST | Tue, 14 Dec 2021 19:15:44 PST |
	| cache   | delete k8s.gcr.io/pause:latest                                              | minikube                       | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:44 PST | Tue, 14 Dec 2021 19:15:44 PST |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:45 PST | Tue, 14 Dec 2021 19:15:45 PST |
	|         | kubectl -- --context                                                        |                                |         |         |                               |                               |
	|         | functional-20211214191315-1964                                              |                                |         |         |                               |                               |
	|         | get pods                                                                    |                                |         |         |                               |                               |
	| kubectl | --profile=functional-20211214191315-1964                                    | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:45 PST | Tue, 14 Dec 2021 19:15:45 PST |
	|         | -- --context                                                                |                                |         |         |                               |                               |
	|         | functional-20211214191315-1964 get pods                                     |                                |         |         |                               |                               |
	|---------|-----------------------------------------------------------------------------|--------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/14 19:15:46
	Running on machine: 37309
	Binary: Built with gc go1.17.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1214 19:15:46.489738    3861 out.go:297] Setting OutFile to fd 1 ...
	I1214 19:15:46.489863    3861 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:15:46.489866    3861 out.go:310] Setting ErrFile to fd 2...
	I1214 19:15:46.489868    3861 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:15:46.489939    3861 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/bin
	I1214 19:15:46.490185    3861 out.go:304] Setting JSON to false
	I1214 19:15:46.514010    3861 start.go:112] hostinfo: {"hostname":"37309.local","uptime":922,"bootTime":1639537224,"procs":316,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1214 19:15:46.514091    3861 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1214 19:15:46.541174    3861 out.go:176] * [functional-20211214191315-1964] minikube v1.24.0 on Darwin 11.2.3
	I1214 19:15:46.541353    3861 notify.go:174] Checking for updates...
	I1214 19:15:46.589761    3861 out.go:176]   - MINIKUBE_LOCATION=13173
	I1214 19:15:46.622946    3861 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	I1214 19:15:46.648157    3861 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1214 19:15:46.674324    3861 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube
	I1214 19:15:46.674728    3861 config.go:176] Loaded profile config "functional-20211214191315-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 19:15:46.674762    3861 driver.go:344] Setting default libvirt URI to qemu:///system
	I1214 19:15:46.770775    3861 docker.go:132] docker version: linux-20.10.6
	I1214 19:15:46.770915    3861 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 19:15:46.952052    3861 info.go:263] docker info: {ID:5AO3:Q7BV:QPO2:IORE:2FWE:BSI4:OSEF:34WA:NLU4:XM3Q:JID7:HR3K Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:53 SystemTime:2021-12-15 03:15:46.880401842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 19:15:47.000605    3861 out.go:176] * Using the docker driver based on existing profile
	I1214 19:15:47.000656    3861 start.go:280] selected driver: docker
	I1214 19:15:47.000664    3861 start.go:795] validating driver "docker" against &{Name:functional-20211214191315-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:functional-20211214191315-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 19:15:47.000799    3861 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1214 19:15:47.001188    3861 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 19:15:47.187824    3861 info.go:263] docker info: {ID:5AO3:Q7BV:QPO2:IORE:2FWE:BSI4:OSEF:34WA:NLU4:XM3Q:JID7:HR3K Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:53 SystemTime:2021-12-15 03:15:47.114916447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 19:15:47.189994    3861 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1214 19:15:47.190026    3861 cni.go:93] Creating CNI manager for ""
	I1214 19:15:47.190033    3861 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1214 19:15:47.190044    3861 start_flags.go:298] config:
	{Name:functional-20211214191315-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:functional-20211214191315-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 19:15:47.236508    3861 out.go:176] * Starting control plane node functional-20211214191315-1964 in cluster functional-20211214191315-1964
	I1214 19:15:47.236656    3861 cache.go:118] Beginning downloading kic base image for docker with docker
	I1214 19:15:47.262553    3861 out.go:176] * Pulling base image ...
	I1214 19:15:47.262649    3861 preload.go:132] Checking if preload exists for k8s version v1.22.4 and runtime docker
	I1214 19:15:47.262668    3861 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab in local docker daemon
	I1214 19:15:47.262728    3861 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4
	I1214 19:15:47.262748    3861 cache.go:57] Caching tarball of preloaded images
	I1214 19:15:47.262970    3861 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1214 19:15:47.262986    3861 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.4 on docker
	I1214 19:15:47.264094    3861 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/config.json ...
	I1214 19:15:47.379931    3861 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab in local docker daemon, skipping pull
	I1214 19:15:47.379943    3861 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab exists in daemon, skipping load
	I1214 19:15:47.379953    3861 cache.go:206] Successfully downloaded all kic artifacts
	I1214 19:15:47.379988    3861 start.go:313] acquiring machines lock for functional-20211214191315-1964: {Name:mk594d6742213ef916a69c00c22eea3f8bde6474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1214 19:15:47.380072    3861 start.go:317] acquired machines lock for "functional-20211214191315-1964" in 68.787µs
	I1214 19:15:47.380095    3861 start.go:93] Skipping create...Using existing machine configuration
	I1214 19:15:47.380101    3861 fix.go:55] fixHost starting: 
	I1214 19:15:47.380343    3861 cli_runner.go:115] Run: docker container inspect functional-20211214191315-1964 --format={{.State.Status}}
	I1214 19:15:47.495263    3861 fix.go:108] recreateIfNeeded on functional-20211214191315-1964: state=Running err=<nil>
	W1214 19:15:47.495308    3861 fix.go:134] unexpected machine state, will restart: <nil>
	I1214 19:15:47.523096    3861 out.go:176] * Updating the running docker "functional-20211214191315-1964" container ...
	I1214 19:15:47.523125    3861 machine.go:88] provisioning docker machine ...
	I1214 19:15:47.523151    3861 ubuntu.go:169] provisioning hostname "functional-20211214191315-1964"
	I1214 19:15:47.523256    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:47.637934    3861 main.go:130] libmachine: Using SSH client type: native
	I1214 19:15:47.638115    3861 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 52227 <nil> <nil>}
	I1214 19:15:47.638126    3861 main.go:130] libmachine: About to run SSH command:
	sudo hostname functional-20211214191315-1964 && echo "functional-20211214191315-1964" | sudo tee /etc/hostname
	I1214 19:15:47.773226    3861 main.go:130] libmachine: SSH cmd err, output: <nil>: functional-20211214191315-1964
	
	I1214 19:15:47.773342    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:47.890603    3861 main.go:130] libmachine: Using SSH client type: native
	I1214 19:15:47.890754    3861 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 52227 <nil> <nil>}
	I1214 19:15:47.890765    3861 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-20211214191315-1964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20211214191315-1964/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-20211214191315-1964' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1214 19:15:48.017844    3861 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1214 19:15:48.017858    3861 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube}
	I1214 19:15:48.017888    3861 ubuntu.go:177] setting up certificates
	I1214 19:15:48.017897    3861 provision.go:83] configureAuth start
	I1214 19:15:48.017992    3861 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20211214191315-1964
	I1214 19:15:48.133762    3861 provision.go:138] copyHostCerts
	I1214 19:15:48.133850    3861 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.pem, removing ...
	I1214 19:15:48.133857    3861 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.pem
	I1214 19:15:48.133981    3861 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.pem (1078 bytes)
	I1214 19:15:48.134199    3861 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cert.pem, removing ...
	I1214 19:15:48.134208    3861 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cert.pem
	I1214 19:15:48.134265    3861 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cert.pem (1123 bytes)
	I1214 19:15:48.134407    3861 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/key.pem, removing ...
	I1214 19:15:48.134410    3861 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/key.pem
	I1214 19:15:48.134465    3861 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/key.pem (1675 bytes)
	I1214 19:15:48.134602    3861 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca-key.pem org=jenkins.functional-20211214191315-1964 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-20211214191315-1964]
	I1214 19:15:48.541072    3861 provision.go:172] copyRemoteCerts
	I1214 19:15:48.541137    3861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1214 19:15:48.541196    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:48.657603    3861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52227 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/functional-20211214191315-1964/id_rsa Username:docker}
	I1214 19:15:48.746055    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1214 19:15:48.762452    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1214 19:15:48.779860    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I1214 19:15:48.796463    3861 provision.go:86] duration metric: configureAuth took 777.886445ms
	I1214 19:15:48.796472    3861 ubuntu.go:193] setting minikube options for container-runtime
	I1214 19:15:48.796652    3861 config.go:176] Loaded profile config "functional-20211214191315-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 19:15:48.796741    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:48.914868    3861 main.go:130] libmachine: Using SSH client type: native
	I1214 19:15:48.914998    3861 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 52227 <nil> <nil>}
	I1214 19:15:48.915005    3861 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1214 19:15:49.039119    3861 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1214 19:15:49.039132    3861 ubuntu.go:71] root file system type: overlay
	I1214 19:15:49.039306    3861 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1214 19:15:49.039408    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:49.157332    3861 main.go:130] libmachine: Using SSH client type: native
	I1214 19:15:49.157488    3861 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 52227 <nil> <nil>}
	I1214 19:15:49.157530    3861 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1214 19:15:49.294842    3861 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1214 19:15:49.294961    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:49.412154    3861 main.go:130] libmachine: Using SSH client type: native
	I1214 19:15:49.412300    3861 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 52227 <nil> <nil>}
	I1214 19:15:49.412310    3861 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1214 19:15:49.542453    3861 main.go:130] libmachine: SSH cmd err, output: <nil>: 
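	The command in the line above is an idempotent update: write the candidate unit as `docker.service.new`, `diff` it against the installed file, and only on a difference move it into place and run `daemon-reload`/`enable`/`restart`. The same write-new/diff/replace pattern, demonstrated on plain temp files (on a real host the replace branch would also run the `systemctl` steps):

```shell
# Write-new / diff / replace: only swap the file in when content changed.
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd --old-flag\n' > "$dir/docker.service"
printf 'ExecStart=/usr/bin/dockerd --new-flag\n' > "$dir/docker.service.new"
# diff exits non-zero on a difference, which triggers the replacement.
diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null || \
  mv "$dir/docker.service.new" "$dir/docker.service"
cat "$dir/docker.service"
```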
	I1214 19:15:49.542461    3861 machine.go:91] provisioned docker machine in 2.017610825s
	I1214 19:15:49.542470    3861 start.go:267] post-start starting for "functional-20211214191315-1964" (driver="docker")
	I1214 19:15:49.542473    3861 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1214 19:15:49.542543    3861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1214 19:15:49.542600    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:49.658775    3861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52227 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/functional-20211214191315-1964/id_rsa Username:docker}
	I1214 19:15:49.752882    3861 ssh_runner.go:195] Run: cat /etc/os-release
	I1214 19:15:49.756552    3861 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1214 19:15:49.756564    3861 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1214 19:15:49.756575    3861 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1214 19:15:49.756579    3861 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1214 19:15:49.756587    3861 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/addons for local assets ...
	I1214 19:15:49.756679    3861 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files for local assets ...
	I1214 19:15:49.756822    3861 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem -> 19642.pem in /etc/ssl/certs
	I1214 19:15:49.756970    3861 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/test/nested/copy/1964/hosts -> hosts in /etc/test/nested/copy/1964
	I1214 19:15:49.757015    3861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1964
	I1214 19:15:49.764068    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem --> /etc/ssl/certs/19642.pem (1708 bytes)
	I1214 19:15:49.780381    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/test/nested/copy/1964/hosts --> /etc/test/nested/copy/1964/hosts (40 bytes)
	I1214 19:15:49.796511    3861 start.go:270] post-start completed in 253.836206ms
	I1214 19:15:49.796584    3861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1214 19:15:49.796644    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:49.911092    3861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52227 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/functional-20211214191315-1964/id_rsa Username:docker}
	I1214 19:15:49.998268    3861 fix.go:57] fixHost completed within 2.615958738s
	I1214 19:15:49.998282    3861 start.go:80] releasing machines lock for "functional-20211214191315-1964", held for 2.615998134s
	I1214 19:15:49.998389    3861 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20211214191315-1964
	I1214 19:15:50.196870    3861 ssh_runner.go:195] Run: systemctl --version
	I1214 19:15:50.196870    3861 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1214 19:15:50.196940    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:50.196952    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:50.426963    3861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52227 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/functional-20211214191315-1964/id_rsa Username:docker}
	I1214 19:15:50.426970    3861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52227 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/functional-20211214191315-1964/id_rsa Username:docker}
	I1214 19:15:50.980157    3861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1214 19:15:50.989754    3861 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1214 19:15:50.999117    3861 cruntime.go:257] skipping containerd shutdown because we are bound to it
	I1214 19:15:50.999175    3861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1214 19:15:51.008537    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1214 19:15:51.020804    3861 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1214 19:15:51.097518    3861 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1214 19:15:51.170273    3861 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1214 19:15:51.179891    3861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1214 19:15:51.253682    3861 ssh_runner.go:195] Run: sudo systemctl start docker
	I1214 19:15:51.263183    3861 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1214 19:15:51.302226    3861 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1214 19:15:51.434269    3861 out.go:203] * Preparing Kubernetes v1.22.4 on Docker 20.10.11 ...
	I1214 19:15:51.434469    3861 cli_runner.go:115] Run: docker exec -t functional-20211214191315-1964 dig +short host.docker.internal
	I1214 19:15:51.715372    3861 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1214 19:15:51.715454    3861 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1214 19:15:51.719934    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:51.892911    3861 out.go:176]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1214 19:15:51.893078    3861 preload.go:132] Checking if preload exists for k8s version v1.22.4 and runtime docker
	I1214 19:15:51.893230    3861 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1214 19:15:51.926769    3861 docker.go:558] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20211214191315-1964
	k8s.gcr.io/kube-apiserver:v1.22.4
	k8s.gcr.io/kube-scheduler:v1.22.4
	k8s.gcr.io/kube-controller-manager:v1.22.4
	k8s.gcr.io/kube-proxy:v1.22.4
	kubernetesui/dashboard:v2.3.1
	k8s.gcr.io/etcd:3.5.0-0
	kubernetesui/metrics-scraper:v1.0.7
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	k8s.gcr.io/pause:3.3
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I1214 19:15:51.926779    3861 docker.go:489] Images already preloaded, skipping extraction
	I1214 19:15:51.926867    3861 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1214 19:15:51.957882    3861 docker.go:558] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20211214191315-1964
	k8s.gcr.io/kube-apiserver:v1.22.4
	k8s.gcr.io/kube-scheduler:v1.22.4
	k8s.gcr.io/kube-controller-manager:v1.22.4
	k8s.gcr.io/kube-proxy:v1.22.4
	kubernetesui/dashboard:v2.3.1
	k8s.gcr.io/etcd:3.5.0-0
	kubernetesui/metrics-scraper:v1.0.7
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	k8s.gcr.io/pause:3.3
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I1214 19:15:51.957898    3861 cache_images.go:79] Images are preloaded, skipping loading
	I1214 19:15:51.957989    3861 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1214 19:15:52.077868    3861 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1214 19:15:52.077893    3861 cni.go:93] Creating CNI manager for ""
	I1214 19:15:52.077901    3861 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1214 19:15:52.077908    3861 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1214 19:15:52.077920    3861 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.22.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20211214191315-1964 NodeName:functional-20211214191315-1964 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1214 19:15:52.078021    3861 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "functional-20211214191315-1964"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1214 19:15:52.078100    3861 kubeadm.go:927] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=functional-20211214191315-1964 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.4 ClusterName:functional-20211214191315-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1214 19:15:52.078158    3861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.22.4
	I1214 19:15:52.086126    3861 binaries.go:44] Found k8s binaries, skipping transfer
	I1214 19:15:52.086185    3861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1214 19:15:52.093235    3861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (356 bytes)
	I1214 19:15:52.105816    3861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1214 19:15:52.118253    3861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1923 bytes)
	I1214 19:15:52.131019    3861 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1214 19:15:52.135028    3861 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964 for IP: 192.168.49.2
	I1214 19:15:52.152083    3861 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.key
	I1214 19:15:52.152139    3861 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/proxy-client-ca.key
	I1214 19:15:52.152233    3861 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.key
	I1214 19:15:52.152297    3861 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/apiserver.key.dd3b5fb2
	I1214 19:15:52.152352    3861 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/proxy-client.key
	I1214 19:15:52.152569    3861 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/1964.pem (1338 bytes)
	W1214 19:15:52.152684    3861 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/1964_empty.pem, impossibly tiny 0 bytes
	I1214 19:15:52.152696    3861 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca-key.pem (1675 bytes)
	I1214 19:15:52.152735    3861 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem (1078 bytes)
	I1214 19:15:52.152769    3861 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/cert.pem (1123 bytes)
	I1214 19:15:52.152802    3861 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/key.pem (1675 bytes)
	I1214 19:15:52.152877    3861 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem (1708 bytes)
	I1214 19:15:52.153781    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1214 19:15:52.172993    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1214 19:15:52.190202    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1214 19:15:52.208351    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1214 19:15:52.225564    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1214 19:15:52.242692    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1214 19:15:52.259357    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1214 19:15:52.276377    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1214 19:15:52.293188    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1214 19:15:52.311980    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/1964.pem --> /usr/share/ca-certificates/1964.pem (1338 bytes)
	I1214 19:15:52.330188    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem --> /usr/share/ca-certificates/19642.pem (1708 bytes)
	I1214 19:15:52.347120    3861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1214 19:15:52.360307    3861 ssh_runner.go:195] Run: openssl version
	I1214 19:15:52.365854    3861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1214 19:15:52.374169    3861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1214 19:15:52.378721    3861 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 15 03:07 /usr/share/ca-certificates/minikubeCA.pem
	I1214 19:15:52.378783    3861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1214 19:15:52.384365    3861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1214 19:15:52.392054    3861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1964.pem && ln -fs /usr/share/ca-certificates/1964.pem /etc/ssl/certs/1964.pem"
	I1214 19:15:52.399734    3861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1964.pem
	I1214 19:15:52.403583    3861 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 15 03:13 /usr/share/ca-certificates/1964.pem
	I1214 19:15:52.403626    3861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1964.pem
	I1214 19:15:52.409201    3861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1964.pem /etc/ssl/certs/51391683.0"
	I1214 19:15:52.417050    3861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19642.pem && ln -fs /usr/share/ca-certificates/19642.pem /etc/ssl/certs/19642.pem"
	I1214 19:15:52.425296    3861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19642.pem
	I1214 19:15:52.429475    3861 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 15 03:13 /usr/share/ca-certificates/19642.pem
	I1214 19:15:52.429526    3861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19642.pem
	I1214 19:15:52.435155    3861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19642.pem /etc/ssl/certs/3ec20f2e.0"
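	The `openssl x509 -hash` / `ln -fs …/<hash>.0` steps above install each certificate under its OpenSSL subject-hash name, which is how OpenSSL-based clients locate CAs in a certificates directory. A self-contained sketch of the same flow using a throwaway CA (assumes the `openssl` CLI is available):

```shell
# Hash-and-link a CA the way minikube does for minikubeCA.pem:
# OpenSSL finds CAs in a directory via <subject-hash>.0 symlinks.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj '/CN=demoCA' -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"   # lookup name used by OpenSSL
readlink "$dir/$hash.0"
```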
	I1214 19:15:52.457896    3861 kubeadm.go:390] StartCluster: {Name:functional-20211214191315-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:functional-20211214191315-1964 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-p
rovisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 19:15:52.458126    3861 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1214 19:15:52.488752    3861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1214 19:15:52.496530    3861 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I1214 19:15:52.496537    3861 kubeadm.go:600] restartCluster start
	I1214 19:15:52.496594    3861 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1214 19:15:52.503469    3861 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1214 19:15:52.503541    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:52.681904    3861 kubeconfig.go:92] found "functional-20211214191315-1964" server: "https://127.0.0.1:52226"
	I1214 19:15:52.684983    3861 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1214 19:15:52.692886    3861 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-12-15 03:14:08.651658086 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-12-15 03:15:52.128043025 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I1214 19:15:52.692894    3861 kubeadm.go:1050] stopping kube-system containers ...
	I1214 19:15:52.692982    3861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1214 19:15:52.723325    3861 docker.go:390] Stopping containers: [f64a119e0f9e 2aa4dddf31ca d3053162ed40 e710431b012a 04a83caa39cf f7375b96ea70 b51d2e575c77 cf796b7222f1 98cdd8174af9 80134238c59c cfc1a8507ed7 7a8aaa408242 7e979a288049 9168974b805c ed86ade71b65]
	I1214 19:15:52.723433    3861 ssh_runner.go:195] Run: docker stop f64a119e0f9e 2aa4dddf31ca d3053162ed40 e710431b012a 04a83caa39cf f7375b96ea70 b51d2e575c77 cf796b7222f1 98cdd8174af9 80134238c59c cfc1a8507ed7 7a8aaa408242 7e979a288049 9168974b805c ed86ade71b65
	I1214 19:15:57.934541    3861 ssh_runner.go:235] Completed: docker stop f64a119e0f9e 2aa4dddf31ca d3053162ed40 e710431b012a 04a83caa39cf f7375b96ea70 b51d2e575c77 cf796b7222f1 98cdd8174af9 80134238c59c cfc1a8507ed7 7a8aaa408242 7e979a288049 9168974b805c ed86ade71b65: (5.20820415s)
	I1214 19:15:57.934629    3861 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1214 19:15:57.972652    3861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1214 19:15:57.980659    3861 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Dec 15 03:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 15 03:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Dec 15 03:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 15 03:14 /etc/kubernetes/scheduler.conf
	
	I1214 19:15:57.980715    3861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1214 19:15:57.988271    3861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1214 19:15:57.997014    3861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1214 19:15:58.004328    3861 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1214 19:15:58.004390    3861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1214 19:15:58.011437    3861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1214 19:15:58.018629    3861 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1214 19:15:58.018684    3861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1214 19:15:58.025642    3861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1214 19:15:58.033024    3861 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1214 19:15:58.033031    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1214 19:15:58.080004    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1214 19:15:58.868511    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1214 19:15:58.998109    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1214 19:15:59.051261    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1214 19:15:59.108180    3861 api_server.go:51] waiting for apiserver process to appear ...
	I1214 19:15:59.108249    3861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1214 19:15:59.124246    3861 api_server.go:71] duration metric: took 16.065396ms to wait for apiserver process to appear ...
	I1214 19:15:59.124258    3861 api_server.go:87] waiting for apiserver healthz status ...
	I1214 19:15:59.124267    3861 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52226/healthz ...
	I1214 19:15:59.129903    3861 api_server.go:266] https://127.0.0.1:52226/healthz returned 200:
	ok
	I1214 19:15:59.137061    3861 api_server.go:140] control plane version: v1.22.4
	I1214 19:15:59.137069    3861 api_server.go:130] duration metric: took 12.803292ms to wait for apiserver health ...
	I1214 19:15:59.137074    3861 cni.go:93] Creating CNI manager for ""
	I1214 19:15:59.137078    3861 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1214 19:15:59.137085    3861 system_pods.go:43] waiting for kube-system pods to appear ...
	I1214 19:15:59.145722    3861 system_pods.go:59] 7 kube-system pods found
	I1214 19:15:59.145734    3861 system_pods.go:61] "coredns-78fcd69978-2p7ps" [1c4fe165-b3ab-48b7-80a8-09a029198ddc] Running
	I1214 19:15:59.145742    3861 system_pods.go:61] "etcd-functional-20211214191315-1964" [bc9932a2-b1d8-444e-a9d9-ee1cdaae56a6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1214 19:15:59.145746    3861 system_pods.go:61] "kube-apiserver-functional-20211214191315-1964" [d931a160-85af-4b99-b6f4-768eb20bbd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1214 19:15:59.145754    3861 system_pods.go:61] "kube-controller-manager-functional-20211214191315-1964" [f8cb9db7-809e-4767-aec3-a519992abf8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1214 19:15:59.145756    3861 system_pods.go:61] "kube-proxy-gxjj9" [2c8a5259-a85d-419b-a99e-9db6b5bb9984] Running
	I1214 19:15:59.145759    3861 system_pods.go:61] "kube-scheduler-functional-20211214191315-1964" [dcf4179d-7ecb-4ab2-9231-0b61695280eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1214 19:15:59.145762    3861 system_pods.go:61] "storage-provisioner" [fb70bc64-478f-4146-9565-9aa0691bc521] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1214 19:15:59.145765    3861 system_pods.go:74] duration metric: took 8.67376ms to wait for pod list to return data ...
	I1214 19:15:59.145768    3861 node_conditions.go:102] verifying NodePressure condition ...
	I1214 19:15:59.149042    3861 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I1214 19:15:59.149052    3861 node_conditions.go:123] node cpu capacity is 6
	I1214 19:15:59.149058    3861 node_conditions.go:105] duration metric: took 3.286769ms to run NodePressure ...
	I1214 19:15:59.149067    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1214 19:15:59.488621    3861 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I1214 19:15:59.492993    3861 kubeadm.go:746] kubelet initialised
	I1214 19:15:59.492998    3861 kubeadm.go:747] duration metric: took 4.367716ms waiting for restarted kubelet to initialise ...
	I1214 19:15:59.493002    3861 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1214 19:15:59.498307    3861 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-2p7ps" in "kube-system" namespace to be "Ready" ...
	I1214 19:15:59.509737    3861 pod_ready.go:92] pod "coredns-78fcd69978-2p7ps" in "kube-system" namespace has status "Ready":"True"
	I1214 19:15:59.509741    3861 pod_ready.go:81] duration metric: took 11.418663ms waiting for pod "coredns-78fcd69978-2p7ps" in "kube-system" namespace to be "Ready" ...
	I1214 19:15:59.509747    3861 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	I1214 19:16:00.027411    3861 pod_ready.go:97] node "functional-20211214191315-1964" hosting pod "etcd-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.027420    3861 pod_ready.go:81] duration metric: took 517.458244ms waiting for pod "etcd-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	E1214 19:16:00.027425    3861 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211214191315-1964" hosting pod "etcd-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.027437    3861 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	I1214 19:16:00.031880    3861 pod_ready.go:97] node "functional-20211214191315-1964" hosting pod "kube-apiserver-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.031889    3861 pod_ready.go:81] duration metric: took 4.444756ms waiting for pod "kube-apiserver-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	E1214 19:16:00.031893    3861 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211214191315-1964" hosting pod "kube-apiserver-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.031901    3861 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	I1214 19:16:00.036881    3861 pod_ready.go:97] node "functional-20211214191315-1964" hosting pod "kube-controller-manager-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.036894    3861 pod_ready.go:81] duration metric: took 4.986954ms waiting for pod "kube-controller-manager-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	E1214 19:16:00.036899    3861 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211214191315-1964" hosting pod "kube-controller-manager-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.036907    3861 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gxjj9" in "kube-system" namespace to be "Ready" ...
	I1214 19:16:00.348026    3861 pod_ready.go:97] node "functional-20211214191315-1964" hosting pod "kube-proxy-gxjj9" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.348036    3861 pod_ready.go:81] duration metric: took 310.99947ms waiting for pod "kube-proxy-gxjj9" in "kube-system" namespace to be "Ready" ...
	E1214 19:16:00.348041    3861 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211214191315-1964" hosting pod "kube-proxy-gxjj9" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.348049    3861 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	I1214 19:16:00.741441    3861 pod_ready.go:97] node "functional-20211214191315-1964" hosting pod "kube-scheduler-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.741452    3861 pod_ready.go:81] duration metric: took 393.246563ms waiting for pod "kube-scheduler-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	E1214 19:16:00.741459    3861 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211214191315-1964" hosting pod "kube-scheduler-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.741470    3861 pod_ready.go:38] duration metric: took 1.247963569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1214 19:16:00.741484    3861 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1214 19:16:00.754546    3861 ops.go:34] apiserver oom_adj: -16
	I1214 19:16:00.754552    3861 kubeadm.go:604] restartCluster took 8.253811473s
	I1214 19:16:00.754560    3861 kubeadm.go:392] StartCluster complete in 8.292472061s
	I1214 19:16:00.754570    3861 settings.go:142] acquiring lock: {Name:mk93abdcbbc46dc3353c37938fd5d548af35ef3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 19:16:00.754655    3861 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	I1214 19:16:00.755134    3861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig: {Name:mk605b877d3a6907cdf2ed75edbb40b36491c1e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 19:16:00.761166    3861 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20211214191315-1964" rescaled to 1
	I1214 19:16:00.761189    3861 start.go:207] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}
	I1214 19:16:00.761208    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1214 19:16:00.809498    3861 out.go:176] * Verifying Kubernetes components...
	I1214 19:16:00.761223    3861 addons.go:415] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I1214 19:16:00.761356    3861 config.go:176] Loaded profile config "functional-20211214191315-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 19:16:00.809618    3861 addons.go:65] Setting storage-provisioner=true in profile "functional-20211214191315-1964"
	I1214 19:16:00.809623    3861 addons.go:65] Setting default-storageclass=true in profile "functional-20211214191315-1964"
	I1214 19:16:00.809639    3861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1214 19:16:00.809646    3861 addons.go:153] Setting addon storage-provisioner=true in "functional-20211214191315-1964"
	W1214 19:16:00.809652    3861 addons.go:165] addon storage-provisioner should already be in state true
	I1214 19:16:00.809666    3861 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20211214191315-1964"
	I1214 19:16:00.809716    3861 host.go:66] Checking if "functional-20211214191315-1964" exists ...
	I1214 19:16:00.810235    3861 cli_runner.go:115] Run: docker container inspect functional-20211214191315-1964 --format={{.State.Status}}
	I1214 19:16:00.810426    3861 cli_runner.go:115] Run: docker container inspect functional-20211214191315-1964 --format={{.State.Status}}
	I1214 19:16:00.824519    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:16:00.876037    3861 start.go:754] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1214 19:16:00.959391    3861 addons.go:153] Setting addon default-storageclass=true in "functional-20211214191315-1964"
	I1214 19:16:00.974938    3861 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1214 19:16:00.974949    3861 addons.go:165] addon default-storageclass should already be in state true
	I1214 19:16:00.975012    3861 host.go:66] Checking if "functional-20211214191315-1964" exists ...
	I1214 19:16:00.975052    3861 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1214 19:16:00.975057    3861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1214 19:16:00.975151    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:16:00.977075    3861 cli_runner.go:115] Run: docker container inspect functional-20211214191315-1964 --format={{.State.Status}}
	I1214 19:16:00.978087    3861 node_ready.go:35] waiting up to 6m0s for node "functional-20211214191315-1964" to be "Ready" ...
	I1214 19:16:01.103777    3861 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1214 19:16:01.103763    3861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52227 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/functional-20211214191315-1964/id_rsa Username:docker}
	I1214 19:16:01.103785    3861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1214 19:16:01.103865    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:16:01.209054    3861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1214 19:16:01.229911    3861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52227 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/functional-20211214191315-1964/id_rsa Username:docker}
	I1214 19:16:01.327497    3861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1214 19:16:01.825801    3861 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I1214 19:16:01.825819    3861 addons.go:417] enableAddons completed in 1.064215937s
	I1214 19:16:02.986728    3861 node_ready.go:58] node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:13.520274    3861 node_ready.go:53] error getting node "functional-20211214191315-1964": Get "https://127.0.0.1:52226/api/v1/nodes/functional-20211214191315-1964": EOF
	I1214 19:16:13.520283    3861 node_ready.go:38] duration metric: took 12.538938152s waiting for node "functional-20211214191315-1964" to be "Ready" ...
	I1214 19:16:13.546098    3861 out.go:176] 
	W1214 19:16:13.546259    3861 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-20211214191315-1964": Get "https://127.0.0.1:52226/api/v1/nodes/functional-20211214191315-1964": EOF
	W1214 19:16:13.546274    3861 out.go:241] * 
	W1214 19:16:13.547407    3861 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-12-15 03:13:38 UTC, end at Wed 2021-12-15 03:16:15 UTC. --
	Dec 15 03:14:05 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:14:05.485563900Z" level=info msg="Docker daemon" commit=847da18 graphdriver(s)=overlay2 version=20.10.11
	Dec 15 03:14:05 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:14:05.485619057Z" level=info msg="Daemon has completed initialization"
	Dec 15 03:14:05 functional-20211214191315-1964 systemd[1]: Started Docker Application Container Engine.
	Dec 15 03:14:05 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:14:05.514652808Z" level=info msg="API listen on [::]:2376"
	Dec 15 03:14:05 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:14:05.517634997Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 15 03:14:38 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:14:38.343980683Z" level=info msg="ignoring event" container=4947489144f7c5ffdb9999a5c820ca584c1c7ac34c664a343a8d023c7988858c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:10 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:10.002435615Z" level=info msg="ignoring event" container=2aa4dddf31ca5c95bb11ef5d391587a62994eaa6673f5b882a3e891b31620d02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:52 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:52.992313976Z" level=info msg="ignoring event" container=d3053162ed40383fbfc2e2d73d796b128b99547df390c2b1cc1dd4d262364777 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:52 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:52.993670031Z" level=info msg="ignoring event" container=f64a119e0f9ec9f1700de287f1c76c4d34480b79c193a1362ab79a4fd837a807 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:52 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:52.993692031Z" level=info msg="ignoring event" container=f7375b96ea701aecbe693239dc7bdafad90a75e2b2ee093b8bfdd051b22dd67c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.076282620Z" level=info msg="ignoring event" container=7e979a288049ff2cb5f990315fd55220e32bd4d18e96a2fc54b653c0d5d6abe5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.076340093Z" level=info msg="ignoring event" container=9168974b805ca0bed1cc3aa576020a6e552b8d3136cb72d7b2943ba7f9cc6399 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.077314827Z" level=info msg="ignoring event" container=b51d2e575c77af287b3c808a5bfadff0dec83dd6eb521c962f660ab11f79a74b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.077377683Z" level=info msg="ignoring event" container=ed86ade71b65f89b7e56691b91945252e14f13a5bbd396140ef689d837a15322 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.080372813Z" level=info msg="ignoring event" container=7a8aaa4082423495b515321130d7b8e3d74cd9fd31c53150f47035f7a35bf64e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.092667993Z" level=info msg="ignoring event" container=cf796b7222f10a52265b4a98ebe7187cdcc086bdaef20996641cbf54a79a3a85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.092697093Z" level=info msg="ignoring event" container=80134238c59ca704502f0fd924df88801bb39b532a8bf20ba02e926746b41180 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.092716016Z" level=info msg="ignoring event" container=04a83caa39cf4148989d264e7469d4fda6721330e50799c42644ffdff8850972 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:54 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:54.173220604Z" level=info msg="ignoring event" container=98cdd8174af90761a11855ba92e246cf2608cf1729505a3eeb340442e871ff3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:54 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:54.271695360Z" level=info msg="ignoring event" container=cfc1a8507ed775e23077bce53467504e1b363d7ecebd2fd9b2ec450ea053ad64 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:57 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:57.920802475Z" level=info msg="ignoring event" container=e710431b012add6f7f0da320605934ec185993a9d21feaa44688e2e802f79263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:16:02 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:16:02.017450721Z" level=info msg="ignoring event" container=17332ff1609831c6b4d66cd6330e22efebcaec7515cdcd5b0be71fc683ea664b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:16:03 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:16:03.054691935Z" level=info msg="ignoring event" container=64b91154d522d9d9e38bee0e5e33ce57bc292e4aafbe12e94fc9837d87c12660 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:16:03 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:16:03.115519622Z" level=info msg="ignoring event" container=d5bb4b81c6217c89f0e371c4d5ff1f606f0bde69c49a85db7f5156eac3609eb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:16:03 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:16:03.147142252Z" level=info msg="ignoring event" container=accfe53dabda18e036cfb0852026f12b75de12ee22b7913018a8924c68bff72d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	64b91154d522d       8a5cc299272d9       13 seconds ago       Exited              kube-apiserver            1                   f9a74e91b429e
	4439a98b43e61       8d147537fb7d1       13 seconds ago       Running             coredns                   1                   e73c505cafa31
	ef1934ce8f8ac       6e38f40d628db       13 seconds ago       Running             storage-provisioner       2                   8f1305c3916db
	8c1c7474f72e2       0ce02f92d3e43       21 seconds ago       Running             kube-controller-manager   1                   4098fe804243a
	7c6987317e156       721ba97f54a65       21 seconds ago       Running             kube-scheduler            1                   6730186d71be2
	dcc8cf8c80d7f       0048118155842       21 seconds ago       Running             etcd                      1                   28efa3afb248b
	5d3ffdec1fc03       edeff87e48029       21 seconds ago       Running             kube-proxy                1                   f87f63e74d16e
	f64a119e0f9ec       6e38f40d628db       About a minute ago   Exited              storage-provisioner       1                   d3053162ed403
	e710431b012ad       8d147537fb7d1       About a minute ago   Exited              coredns                   0                   f7375b96ea701
	04a83caa39cf4       edeff87e48029       About a minute ago   Exited              kube-proxy                0                   b51d2e575c77a
	cf796b7222f10       0048118155842       About a minute ago   Exited              etcd                      0                   7a8aaa4082423
	98cdd8174af90       721ba97f54a65       About a minute ago   Exited              kube-scheduler            0                   7e979a288049f
	80134238c59ca       0ce02f92d3e43       About a minute ago   Exited              kube-controller-manager   0                   9168974b805ca
	
	* 
	* ==> coredns [4439a98b43e6] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	W1215 03:16:03.128120       1 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	W1215 03:16:03.128209       1 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	W1215 03:16:03.128269       1 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	E1215 03:16:03.981876       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:04.268693       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:04.479070       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:05.834363       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:06.025058       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:06.561972       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:10.448457       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:10.684428       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:11.829825       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [e710431b012a] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.047018] bpfilter: read fail 0
	[  +0.033150] bpfilter: read fail 0
	[  +0.028058] bpfilter: read fail 0
	[  +0.034935] bpfilter: write fail -32
	[  +0.028918] bpfilter: read fail 0
	[  +0.031942] bpfilter: read fail 0
	[  +0.030595] bpfilter: read fail 0
	[  +0.045335] bpfilter: write fail -32
	[  +0.032235] bpfilter: read fail 0
	[  +0.025872] bpfilter: read fail 0
	[  +0.031574] bpfilter: write fail -32
	[  +0.039860] bpfilter: read fail 0
	[  +0.021811] bpfilter: write fail -32
	[  +0.023679] bpfilter: read fail 0
	[  +0.030014] bpfilter: read fail 0
	[  +0.028810] bpfilter: read fail 0
	[  +0.032187] bpfilter: read fail 0
	[  +0.042584] bpfilter: read fail 0
	[  +0.036333] bpfilter: read fail 0
	[  +0.036147] bpfilter: read fail 0
	[  +0.038902] bpfilter: read fail 0
	[  +0.034192] bpfilter: read fail 0
	[  +0.042783] bpfilter: read fail 0
	[  +0.027472] bpfilter: write fail -32
	[  +0.030119] bpfilter: read fail 0
	
	* 
	* ==> etcd [cf796b7222f1] <==
	* {"level":"info","ts":"2021-12-15T03:14:20.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2021-12-15T03:14:20.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2021-12-15T03:14:20.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2021-12-15T03:14:20.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-12-15T03:14:20.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2021-12-15T03:14:20.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-12-15T03:14:20.247Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-15T03:14:20.248Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-15T03:14:20.248Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-15T03:14:20.248Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-15T03:14:20.248Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20211214191315-1964 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-15T03:14:20.248Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-15T03:14:20.249Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-15T03:14:20.249Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-12-15T03:14:20.248Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-15T03:14:20.249Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-12-15T03:14:20.249Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2021-12-15T03:15:52.876Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2021-12-15T03:15:52.876Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20211214191315-1964","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2021/12/15 03:15:52 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2021/12/15 03:15:52 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2021-12-15T03:15:52.886Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2021-12-15T03:15:52.966Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-12-15T03:15:52.968Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-12-15T03:15:52.968Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20211214191315-1964","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [dcc8cf8c80d7] <==
	* {"level":"info","ts":"2021-12-15T03:15:54.889Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-12-15T03:15:54.890Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.0","cluster-id":"fa54960ea34d58be","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-15T03:15:54.891Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-12-15T03:15:54.891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2021-12-15T03:15:54.891Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2021-12-15T03:15:54.891Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-12-15T03:15:54.894Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-12-15T03:15:54.894Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-12-15T03:15:54.894Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-12-15T03:15:54.894Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-12-15T03:15:54.894Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-12-15T03:15:55.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2021-12-15T03:15:55.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2021-12-15T03:15:55.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-12-15T03:15:55.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2021-12-15T03:15:55.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2021-12-15T03:15:55.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2021-12-15T03:15:55.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2021-12-15T03:15:55.589Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20211214191315-1964 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-15T03:15:55.589Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-15T03:15:55.590Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-15T03:15:55.590Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2021-12-15T03:15:55.591Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-12-15T03:15:55.594Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-15T03:15:55.594Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  03:16:16 up 10 min,  0 users,  load average: 2.05, 1.87, 1.13
	Linux functional-20211214191315-1964 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [64b91154d522] <==
	* I1215 03:16:03.036097       1 server.go:553] external host was not specified, using 192.168.49.2
	I1215 03:16:03.036579       1 server.go:161] Version: v1.22.4
	Error: failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use
	
	* 
	* ==> kube-controller-manager [80134238c59c] <==
	* I1215 03:14:36.876093       1 shared_informer.go:247] Caches are synced for PVC protection 
	I1215 03:14:36.880492       1 shared_informer.go:247] Caches are synced for persistent volume 
	I1215 03:14:36.910898       1 shared_informer.go:247] Caches are synced for stateful set 
	I1215 03:14:36.913435       1 shared_informer.go:247] Caches are synced for disruption 
	I1215 03:14:36.913461       1 disruption.go:371] Sending events to api server.
	I1215 03:14:36.918353       1 shared_informer.go:247] Caches are synced for taint 
	I1215 03:14:36.918396       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1215 03:14:36.918408       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	W1215 03:14:36.918463       1 node_lifecycle_controller.go:1013] Missing timestamp for Node functional-20211214191315-1964. Assuming now as a timestamp.
	I1215 03:14:36.918481       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
	I1215 03:14:36.918522       1 event.go:291] "Event occurred" object="functional-20211214191315-1964" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20211214191315-1964 event: Registered Node functional-20211214191315-1964 in Controller"
	I1215 03:14:36.943593       1 shared_informer.go:247] Caches are synced for resource quota 
	I1215 03:14:36.960418       1 shared_informer.go:247] Caches are synced for daemon sets 
	I1215 03:14:36.963217       1 shared_informer.go:247] Caches are synced for resource quota 
	I1215 03:14:37.063224       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I1215 03:14:37.113606       1 shared_informer.go:247] Caches are synced for attach detach 
	I1215 03:14:37.316683       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
	I1215 03:14:37.484736       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1215 03:14:37.508173       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1215 03:14:37.508228       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1215 03:14:37.554526       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I1215 03:14:37.669605       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gxjj9"
	I1215 03:14:37.766673       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-28d69"
	I1215 03:14:37.770138       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-2p7ps"
	I1215 03:14:37.783176       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-28d69"
	
	* 
	* ==> kube-controller-manager [8c1c7474f72e] <==
	* I1215 03:15:59.388861       1 shared_informer.go:240] Waiting for caches to sync for TTL
	I1215 03:15:59.391663       1 controllermanager.go:577] Started "tokencleaner"
	I1215 03:15:59.391797       1 tokencleaner.go:118] Starting token cleaner controller
	I1215 03:15:59.391824       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I1215 03:15:59.391832       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I1215 03:15:59.393622       1 controllermanager.go:577] Started "endpointslicemirroring"
	I1215 03:15:59.393651       1 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
	I1215 03:15:59.393655       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
	I1215 03:15:59.414246       1 controllermanager.go:577] Started "replicationcontroller"
	I1215 03:15:59.414292       1 replica_set.go:186] Starting replicationcontroller controller
	I1215 03:15:59.414300       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
	I1215 03:15:59.469414       1 controllermanager.go:577] Started "ephemeral-volume"
	I1215 03:15:59.469485       1 controller.go:170] Starting ephemeral volume controller
	I1215 03:15:59.469492       1 shared_informer.go:240] Waiting for caches to sync for ephemeral
	I1215 03:15:59.514954       1 controllermanager.go:577] Started "deployment"
	W1215 03:15:59.514987       1 core.go:245] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
	W1215 03:15:59.514991       1 controllermanager.go:569] Skipping "route"
	I1215 03:15:59.515055       1 deployment_controller.go:153] "Starting controller" controller="deployment"
	I1215 03:15:59.515062       1 shared_informer.go:240] Waiting for caches to sync for deployment
	I1215 03:15:59.567662       1 node_ipam_controller.go:91] Sending events to api server.
	W1215 03:16:09.569725       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W1215 03:16:10.070297       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W1215 03:16:11.070723       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W1215 03:16:13.072240       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	E1215 03:16:13.072328       1 cidr_allocator.go:137] Failed to list all nodes: Get "https://192.168.49.2:8441/api/v1/nodes": failed to get token for kube-system/node-controller: timed out waiting for the condition
	
	* 
	* ==> kube-proxy [04a83caa39cf] <==
	* I1215 03:14:38.478529       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I1215 03:14:38.478616       1 server_others.go:140] Detected node IP 192.168.49.2
	W1215 03:14:38.478629       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I1215 03:14:40.670471       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I1215 03:14:40.670534       1 server_others.go:212] Using iptables Proxier.
	I1215 03:14:40.670542       1 server_others.go:219] creating dualStackProxier for iptables.
	W1215 03:14:40.670554       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I1215 03:14:40.670925       1 server.go:649] Version: v1.22.4
	I1215 03:14:40.671441       1 config.go:315] Starting service config controller
	I1215 03:14:40.671679       1 config.go:224] Starting endpoint slice config controller
	I1215 03:14:40.671690       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1215 03:14:40.671795       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1215 03:14:40.772107       1 shared_informer.go:247] Caches are synced for service config 
	I1215 03:14:40.772108       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1215 03:14:55.385982       1 trace.go:205] Trace[1574108429]: "iptables restore" (15-Dec-2021 03:14:53.339) (total time: 2046ms):
	Trace[1574108429]: [2.046814362s] [2.046814362s] END
	I1215 03:15:05.078911       1 trace.go:205] Trace[861784797]: "iptables restore" (15-Dec-2021 03:15:02.815) (total time: 2262ms):
	Trace[861784797]: [2.262954534s] [2.262954534s] END
	I1215 03:15:27.560508       1 trace.go:205] Trace[2080217812]: "iptables restore" (15-Dec-2021 03:15:25.495) (total time: 2064ms):
	Trace[2080217812]: [2.064668229s] [2.064668229s] END
	
	* 
	* ==> kube-proxy [5d3ffdec1fc0] <==
	* E1215 03:15:54.787435       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211214191315-1964": dial tcp 192.168.49.2:8441: connect: connection refused
	I1215 03:15:57.291183       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I1215 03:15:57.291203       1 server_others.go:140] Detected node IP 192.168.49.2
	W1215 03:15:57.291256       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I1215 03:15:59.674391       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I1215 03:15:59.674438       1 server_others.go:212] Using iptables Proxier.
	I1215 03:15:59.674447       1 server_others.go:219] creating dualStackProxier for iptables.
	W1215 03:15:59.674455       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I1215 03:15:59.674910       1 server.go:649] Version: v1.22.4
	I1215 03:15:59.675386       1 config.go:224] Starting endpoint slice config controller
	I1215 03:15:59.675433       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1215 03:15:59.675486       1 config.go:315] Starting service config controller
	I1215 03:15:59.675528       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1215 03:15:59.776270       1 shared_informer.go:247] Caches are synced for service config 
	I1215 03:15:59.776319       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1215 03:16:08.509120       1 trace.go:205] Trace[1732331907]: "iptables restore" (15-Dec-2021 03:16:06.402) (total time: 2106ms):
	Trace[1732331907]: [2.106941004s] [2.106941004s] END
	
	* 
	* ==> kube-scheduler [7c6987317e15] <==
	* I1215 03:15:55.430561       1 serving.go:347] Generated self-signed cert in-memory
	W1215 03:15:57.210880       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1215 03:15:57.210915       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1215 03:15:57.210931       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1215 03:15:57.210935       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1215 03:15:57.277759       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I1215 03:15:57.277853       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1215 03:15:57.277921       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1215 03:15:57.278236       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E1215 03:15:57.287140       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1215 03:15:57.287176       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1215 03:15:57.287206       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1215 03:15:57.287226       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1215 03:15:57.287231       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1215 03:15:57.287242       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1215 03:15:57.287257       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1215 03:15:57.288506       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I1215 03:15:57.378199       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [98cdd8174af9] <==
	* E1215 03:14:22.036186       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1215 03:14:22.036497       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1215 03:14:22.036752       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1215 03:14:22.036880       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1215 03:14:22.036888       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1215 03:14:22.037145       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1215 03:14:22.037373       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1215 03:14:22.037471       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1215 03:14:22.038776       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1215 03:14:22.040260       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1215 03:14:22.040387       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1215 03:14:22.040459       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1215 03:14:22.040523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1215 03:14:22.040630       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1215 03:14:22.893616       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1215 03:14:22.959362       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1215 03:14:22.963661       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1215 03:14:23.001823       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1215 03:14:23.038634       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1215 03:14:23.063601       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1215 03:14:23.092531       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1215 03:14:23.434543       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I1215 03:15:52.893347       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1215 03:15:52.893758       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1215 03:15:52.893998       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-12-15 03:13:38 UTC, end at Wed 2021-12-15 03:16:17 UTC. --
	Dec 15 03:16:09 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:09.597863    6116 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"functional-20211214191315-1964\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211214191315-1964?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:09 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:09.597926    6116 kubelet_node_status.go:457] "Unable to update node status" err="update node status exceeds retry count"
	Dec 15 03:16:09 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:09.616869    6116 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211214191315-1964?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Dec 15 03:16:09 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:09.617100    6116 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211214191315-1964?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Dec 15 03:16:09 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:09.617307    6116 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211214191315-1964?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Dec 15 03:16:09 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:09.617479    6116 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211214191315-1964?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Dec 15 03:16:09 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:09.617733    6116 controller.go:187] failed to update lease, error: Put "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211214191315-1964?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Dec 15 03:16:09 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:09.617778    6116 controller.go:114] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
	Dec 15 03:16:09 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:09.617969    6116 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211214191315-1964?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Dec 15 03:16:09 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:09.819339    6116 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211214191315-1964?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Dec 15 03:16:10 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:10.172460    6116 status_manager.go:601] "Failed to get status for pod" podUID=158b4d3fe2f0b69d80a4c203dc10174b pod="kube-system/kube-scheduler-functional-20211214191315-1964" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20211214191315-1964\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:10 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:10.220378    6116 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211214191315-1964?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Dec 15 03:16:10 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:10.273316    6116 status_manager.go:601] "Failed to get status for pod" podUID=158b4d3fe2f0b69d80a4c203dc10174b pod="kube-system/kube-scheduler-functional-20211214191315-1964" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20211214191315-1964\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:11 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:11.021447    6116 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211214191315-1964?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Dec 15 03:16:11 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:11.081599    6116 status_manager.go:601] "Failed to get status for pod" podUID=7e1fca3ceff1ea1bbb21731965864899 pod="kube-system/etcd-functional-20211214191315-1964" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20211214191315-1964\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:11 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:11.081813    6116 status_manager.go:601] "Failed to get status for pod" podUID=158b4d3fe2f0b69d80a4c203dc10174b pod="kube-system/kube-scheduler-functional-20211214191315-1964" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20211214191315-1964\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:11 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:11.082168    6116 status_manager.go:601] "Failed to get status for pod" podUID=1c4fe165-b3ab-48b7-80a8-09a029198ddc pod="kube-system/coredns-78fcd69978-2p7ps" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-78fcd69978-2p7ps\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:11 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:11.082314    6116 status_manager.go:601] "Failed to get status for pod" podUID=3e79e46645c5ff06219b2411b43b9513 pod="kube-system/kube-apiserver-functional-20211214191315-1964" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20211214191315-1964\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:11 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:11.082439    6116 status_manager.go:601] "Failed to get status for pod" podUID=64059561543026c4018b52111bbdf496 pod="kube-system/kube-controller-manager-functional-20211214191315-1964" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20211214191315-1964\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:11 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:11.082562    6116 status_manager.go:601] "Failed to get status for pod" podUID=fb70bc64-478f-4146-9565-9aa0691bc521 pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:11 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:11.127308    6116 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Dec 15 03:16:11 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:11.128268    6116 status_manager.go:601] "Failed to get status for pod" podUID=1c4fe165-b3ab-48b7-80a8-09a029198ddc pod="kube-system/coredns-78fcd69978-2p7ps" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-78fcd69978-2p7ps\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:12 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:12.621896    6116 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211214191315-1964?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Dec 15 03:16:15 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:15.372893    6116 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-20211214191315-1964.16c0cf45d5fd9c36", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-20211214191315-1964", UID:"3e79e46645c5ff06219b2411b43b9513", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"functional-20211214191315-1964"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc066755cccadde36, ext:4214666564, loc:(*time.Location)(0x77ab6e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc066755cccadde36, ext:4214666564, loc:(*time.Location)(0x77ab6e0)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	Dec 15 03:16:15 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:15.815998    6116 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211214191315-1964?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	
	* 
	* ==> storage-provisioner [ef1934ce8f8a] <==
	* I1215 03:16:02.181752       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1215 03:16:02.194896       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1215 03:16:02.194923       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E1215 03:16:05.655127       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:09.913376       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:13.509098       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:16.553607       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> storage-provisioner [f64a119e0f9e] <==
	* I1215 03:15:10.975949       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1215 03:15:10.984957       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1215 03:15:10.985004       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1215 03:15:10.997867       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1215 03:15:10.997992       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20211214191315-1964_1fec940e-cd00-4f60-b45c-53c04bc9b5a3!
	I1215 03:15:10.997922       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc75cbeb-f0ab-4a0a-ad48-bee87e673b9d", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20211214191315-1964_1fec940e-cd00-4f60-b45c-53c04bc9b5a3 became leader
	I1215 03:15:11.098810       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20211214191315-1964_1fec940e-cd00-4f60-b45c-53c04bc9b5a3!
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1214 19:16:15.818177    3999 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-20211214191315-1964 -n functional-20211214191315-1964
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-20211214191315-1964 -n functional-20211214191315-1964: exit status 2 (633.309679ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:257: "functional-20211214191315-1964" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (32.03s)

TestFunctional/serial/ComponentHealth (11.14s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:789: (dbg) Run:  kubectl --context functional-20211214191315-1964 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:789: (dbg) Done: kubectl --context functional-20211214191315-1964 get po -l tier=control-plane -n kube-system -o=json: (5.926167442s)
functional_test.go:804: etcd phase: Running
functional_test.go:812: etcd is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2021-12-14 19:14:30 -0800 PST ContainerStatuses:[{Name:etcd State:{Waiting:<nil> Running:0xc000f696b0 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc000480000} Ready:false RestartCount:1 Image:k8s.gcr.io/etcd:3.5.0-0 ImageID:docker-pullable://k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d ContainerID:docker://dcc8cf8c80d7fbbea83c2861d783baf8d3893258f335a145f4676cc4084aa3bd}]}
functional_test.go:804: kube-apiserver phase: Pending
functional_test.go:806: kube-apiserver is not Running: {Phase:Pending Conditions:[] Message: Reason: HostIP: PodIP: StartTime:<nil> ContainerStatuses:[]}
functional_test.go:804: kube-controller-manager phase: Running
functional_test.go:812: kube-controller-manager is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2021-12-14 19:14:30 -0800 PST ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:<nil> Running:0xc000f69b60 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc000480070} Ready:false RestartCount:1 Image:k8s.gcr.io/kube-controller-manager:v1.22.4 ImageID:docker-pullable://k8s.gcr.io/kube-controller-manager@sha256:fc31b9bd0c4fae88bb10f87b17d7c81f18278fd99f6e46832c22a6ad4f2a617c ContainerID:docker://8c1c7474f72e24b75e5bcc68fb279757a7a9e6680d4849ea4df4409cdde51a11}]}
functional_test.go:804: kube-scheduler phase: Running
functional_test.go:812: kube-scheduler is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2021-12-14 19:14:30 -0800 PST ContainerStatuses:[{Name:kube-scheduler State:{Waiting:<nil> Running:0xc000f69d58 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0004800e0} Ready:false RestartCount:1 Image:k8s.gcr.io/kube-scheduler:v1.22.4 ImageID:docker-pullable://k8s.gcr.io/kube-scheduler@sha256:35e7fb6d7e570caa10f9545c46f7c5d852c7c23781efa933d97d1c12dbcd877b ContainerID:docker://7c6987317e15620db34f2a55d88a64905e0cb7d59d1fbe3b7a1058e5eb500315}]}
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect functional-20211214191315-1964
helpers_test.go:236: (dbg) docker inspect functional-20211214191315-1964:

-- stdout --
	[
	    {
	        "Id": "063d382c672d09e0251ddbbea7e2717b5981f7dcc7111ce59695ed33d31998c1",
	        "Created": "2021-12-15T03:13:27.603409374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36751,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-12-15T03:13:36.628247362Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:25dd136318bb473aa79e4b6546ab850b776602a93c650a0488bd51f9337f57cc",
	        "ResolvConfPath": "/var/lib/docker/containers/063d382c672d09e0251ddbbea7e2717b5981f7dcc7111ce59695ed33d31998c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/063d382c672d09e0251ddbbea7e2717b5981f7dcc7111ce59695ed33d31998c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/063d382c672d09e0251ddbbea7e2717b5981f7dcc7111ce59695ed33d31998c1/hosts",
	        "LogPath": "/var/lib/docker/containers/063d382c672d09e0251ddbbea7e2717b5981f7dcc7111ce59695ed33d31998c1/063d382c672d09e0251ddbbea7e2717b5981f7dcc7111ce59695ed33d31998c1-json.log",
	        "Name": "/functional-20211214191315-1964",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20211214191315-1964:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20211214191315-1964",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/de7b6edd911d2cbffaabb6b928787a944c049e957c86fd228fec6c5d6fbe8a2d-init/diff:/var/lib/docker/overlay2/d65d6ee199d533962e560eb1009b9421c35530981cd2b3a5c895e7aa29133ef8/diff:/var/lib/docker/overlay2/cf5813d5f780adc966435adb87967f9ac9286e5cca19b6bffc7faf37a1488181/diff:/var/lib/docker/overlay2/bbf7d73443b3c1f6753765c62119ac0cb4533734f58ea785a07e83bd45fd609b/diff:/var/lib/docker/overlay2/7d3cc6311f7e53b5fe29c271d1dda738d6a6fd4d81dd72b820fa40b56493c076/diff:/var/lib/docker/overlay2/e59bd0b69cd268058ccffb5fb6a21ada51440450dc39f46af8d6ae6a0b58bb88/diff:/var/lib/docker/overlay2/2801fff88f8794874e8eedcda08e1e51151070b8b3446abc9bb5e92a0995b5f9/diff:/var/lib/docker/overlay2/d62b1160868c64ec1baf9e63759f5726132676409fde8648705e09807185e554/diff:/var/lib/docker/overlay2/4cca4926b4249a86bdf8415cf6388c243dde6ce324e7d3ca1e99bfd0b003ad11/diff:/var/lib/docker/overlay2/15154e198d38ee378134c98df7652e12a01ac6a18a3ebac07bac22dde0f70cb2/diff:/var/lib/docker/overlay2/34f5aaf154bad24983cab59326bd1f9afebc4a0faede5465d9d049edd03752ef/diff:/var/lib/docker/overlay2/d93805ff58045a7635ffb94b9d6b0e525969424603b68bdfe232872070622e29/diff:/var/lib/docker/overlay2/1a84b556261df2d3a163c7dbb1020f88045caf924509e24e2d820ba5d8780b37/diff:/var/lib/docker/overlay2/140a3ef1453300c149d79ee7cb6cbd101478204fafa6f798ead4ff62cdedcea9/diff:/var/lib/docker/overlay2/18ed14d5017caa3b45068784dd46b626a571fbb85523a3dc0961c19e6fa9b052/diff:/var/lib/docker/overlay2/d41f951f52167ee3e8a1479a3e22b5fe6c67cd78372274d636ea1385fbcaff45/diff:/var/lib/docker/overlay2/8d06d7c23fcac42d6d4375c4df7b5cb42aa7dda7c109cd79784ed629e2b99162/diff:/var/lib/docker/overlay2/f7f73d07aef9e7d1cd8d512c0c87187a7c1716177c6b4abb38f7b672fa78bfd7/diff:/var/lib/docker/overlay2/48fbe6c22b155794cba062ea1054747e89ddafdd693d4b0613e11fa7c1078bcb/diff:/var/lib/docker/overlay2/bc52c503a14b163785b1d3d507f12ed2bd756bb56bf0e69cc95abeb09564b53f/diff:/var/lib/docker/overlay2/c7505994e09c79aba0a61d1035017d2c4146a8145da48c793a8b34038e28eba3/diff:/var/lib/docker/overlay2/7c877c41fb0dc251309eeed242ef1d07aab3ce445f8a00bd511510bafadf6190/diff:/var/lib/docker/overlay2/476a1804d5d4872206e789f62720a282f86e8a147ff27052295111051b445bc6/diff:/var/lib/docker/overlay2/9b3dd64375f79358e4a247638ba3b21d75fa9574ea58ae70540c7f8a40c6c7e0/diff:/var/lib/docker/overlay2/5764bedae373ccbfd87a2d93dc8f05063553bbc56f8702001e93a71126611160/diff:/var/lib/docker/overlay2/b1d980593cf4766dd03426b96cea1461ecb9df0ab9e9b0dcb0707f52aca17667/diff:/var/lib/docker/overlay2/f80ab1026afe4bfff3af056bb301eefbdd80b45d07b1ec69d47518d8929ad293/diff:/var/lib/docker/overlay2/6aca893938f71b11291705caf4db29dac4047a08ea7dbe7a890f541acf583c7e/diff:/var/lib/docker/overlay2/2be1be3d25f65a42cfa382d19c19591c016767d1b0e981438e1bc1c1eccbbb26/diff:/var/lib/docker/overlay2/37dc1176e9195ffb3c40ecbbae26ec9cd8f8e10ddb002abe3d8b4ec32429c0cc/diff:/var/lib/docker/overlay2/fb83115e15a1ac8e98ae386d5b286c6bb9c33b19c88c3ee2cb6e56cbfced9101/diff:/var/lib/docker/overlay2/a0a75022c882a65c8f12d99327ab1b4774935514c4100c986b40f26dfa657cbc/diff:/var/lib/docker/overlay2/c3d5a8cd2f734ba495b40e924f240b86adbf7127ffcc37a7b46d074f326a92b2/diff:/var/lib/docker/overlay2/e8a4a96d1a43dc42113e8db34f203842db67806cbd8a4c9d4eb308f177e0a639/diff:/var/lib/docker/overlay2/79a5cf9e15648205b381d71c57af6afa5ed3657072b7fd68a7dfaf384165fb88/diff:/var/lib/docker/overlay2/8c3eadf3bf1e807a2fe0d03a3f3074ae928cdc951a7f8e5f12a64cf65a7af79d/diff:/var/lib/docker/overlay2/5bcc190ed22527c90283b3c8a56ecf82ab1af2d24ce87fe599b4b2ac4ec7d9d5/diff:/var/lib/docker/overlay2/a5009febf0a368117ee8e9860233cfe0424c46701611b76625bb6b423cba86ac/diff:/var/lib/docker/overlay2/1ac60e0a0574910c05a20192d5988606665d3101ced6bdbc31b7660cd8431283/diff:/var/lib/docker/overlay2/c6cdf5fd609878026154660951e80c9c6bc61a49cd2d889fbdccea6c8c36d474/diff:/var/lib/docker/overlay2/46c08365e5d94e0fcaca61e53b0d880b1b42b9c1387136f352318dca068deef3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de7b6edd911d2cbffaabb6b928787a944c049e957c86fd228fec6c5d6fbe8a2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de7b6edd911d2cbffaabb6b928787a944c049e957c86fd228fec6c5d6fbe8a2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de7b6edd911d2cbffaabb6b928787a944c049e957c86fd228fec6c5d6fbe8a2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20211214191315-1964",
	                "Source": "/var/lib/docker/volumes/functional-20211214191315-1964/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20211214191315-1964",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20211214191315-1964",
	                "name.minikube.sigs.k8s.io": "functional-20211214191315-1964",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3fb97efd3b1452cfe5e5eb7721362081e1f4c664ecf3cbfd167fbe642693a5ef",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52227"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52228"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52229"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52230"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52226"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3fb97efd3b14",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20211214191315-1964": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "063d382c672d",
	                        "functional-20211214191315-1964"
	                    ],
	                    "NetworkID": "65ae89935c9b6854d4e470dffb3ea0766ae64e3661db1cb2b0a74f255d5cd0c7",
	                    "EndpointID": "0739de465c186d772feba1f3bc4ad525c49028697d0b88cd5f7db71c0a76262a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
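The `NetworkSettings.Ports` map in the inspect output above shows how each container port (22, 2376, 5000, 8441, 32443) is published on a distinct `127.0.0.1` host port, which is how minikube reaches the SSH and API-server endpoints of the kic container. A minimal sketch of reading one of those mappings out of `docker inspect` JSON (the helper name is hypothetical, not minikube's actual code; the sample is trimmed from the output above):

```python
import json

# Trimmed sample mirroring the "NetworkSettings.Ports" shape shown above.
inspect_output = json.loads("""
[
  {
    "NetworkSettings": {
      "Ports": {
        "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "52227"}],
        "8441/tcp": [{"HostIp": "127.0.0.1", "HostPort": "52226"}]
      }
    }
  }
]
""")

def host_port(inspect_data, container_port):
    """Return (HostIp, HostPort) the given container port is published on,
    or None when the port has no binding."""
    bindings = inspect_data[0]["NetworkSettings"]["Ports"].get(container_port) or []
    if not bindings:
        return None
    return bindings[0]["HostIp"], bindings[0]["HostPort"]

# 8441/tcp is the custom --apiserver-port used by this profile.
print(host_port(inspect_output, "8441/tcp"))
```

The same lookup can be done directly with `docker inspect --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' <container>`.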

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211214191315-1964 -n functional-20211214191315-1964
helpers_test.go:240: (dbg) Done: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20211214191315-1964 -n functional-20211214191315-1964: (1.304814093s)
helpers_test.go:245: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 logs -n 25: (3.015049074s)
helpers_test.go:253: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------|--------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                                    Args                                     |            Profile             |  User   | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------------------------------------------|--------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | nospam-20211214191129-1964 --log_dir                                        | nospam-20211214191129-1964     | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:12:48 PST | Tue, 14 Dec 2021 19:12:49 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 |                                |         |         |                               |                               |
	|         | unpause                                                                     |                                |         |         |                               |                               |
	| -p      | nospam-20211214191129-1964 --log_dir                                        | nospam-20211214191129-1964     | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:12:49 PST | Tue, 14 Dec 2021 19:12:50 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 |                                |         |         |                               |                               |
	|         | unpause                                                                     |                                |         |         |                               |                               |
	| -p      | nospam-20211214191129-1964 --log_dir                                        | nospam-20211214191129-1964     | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:12:50 PST | Tue, 14 Dec 2021 19:12:50 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 |                                |         |         |                               |                               |
	|         | unpause                                                                     |                                |         |         |                               |                               |
	| -p      | nospam-20211214191129-1964 --log_dir                                        | nospam-20211214191129-1964     | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:12:50 PST | Tue, 14 Dec 2021 19:13:08 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 |                                |         |         |                               |                               |
	|         | stop                                                                        |                                |         |         |                               |                               |
	| -p      | nospam-20211214191129-1964 --log_dir                                        | nospam-20211214191129-1964     | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:13:08 PST | Tue, 14 Dec 2021 19:13:08 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 |                                |         |         |                               |                               |
	|         | stop                                                                        |                                |         |         |                               |                               |
	| -p      | nospam-20211214191129-1964 --log_dir                                        | nospam-20211214191129-1964     | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:13:08 PST | Tue, 14 Dec 2021 19:13:08 PST |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 |                                |         |         |                               |                               |
	|         | stop                                                                        |                                |         |         |                               |                               |
	| delete  | -p nospam-20211214191129-1964                                               | nospam-20211214191129-1964     | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:13:09 PST | Tue, 14 Dec 2021 19:13:15 PST |
	| start   | -p                                                                          | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:13:15 PST | Tue, 14 Dec 2021 19:15:19 PST |
	|         | functional-20211214191315-1964                                              |                                |         |         |                               |                               |
	|         | --memory=4000                                                               |                                |         |         |                               |                               |
	|         | --apiserver-port=8441                                                       |                                |         |         |                               |                               |
	|         | --wait=all --driver=docker                                                  |                                |         |         |                               |                               |
	| start   | -p                                                                          | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:19 PST | Tue, 14 Dec 2021 19:15:26 PST |
	|         | functional-20211214191315-1964                                              |                                |         |         |                               |                               |
	|         | --alsologtostderr -v=8                                                      |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:28 PST | Tue, 14 Dec 2021 19:15:30 PST |
	|         | cache add k8s.gcr.io/pause:3.1                                              |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:30 PST | Tue, 14 Dec 2021 19:15:34 PST |
	|         | cache add k8s.gcr.io/pause:3.3                                              |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:34 PST | Tue, 14 Dec 2021 19:15:37 PST |
	|         | cache add                                                                   |                                |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                     |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964 cache add                                    | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:38 PST | Tue, 14 Dec 2021 19:15:39 PST |
	|         | minikube-local-cache-test:functional-20211214191315-1964                    |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964 cache delete                                 | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:39 PST | Tue, 14 Dec 2021 19:15:39 PST |
	|         | minikube-local-cache-test:functional-20211214191315-1964                    |                                |         |         |                               |                               |
	| cache   | delete k8s.gcr.io/pause:3.3                                                 | minikube                       | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:40 PST | Tue, 14 Dec 2021 19:15:40 PST |
	| cache   | list                                                                        | minikube                       | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:40 PST | Tue, 14 Dec 2021 19:15:40 PST |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:40 PST | Tue, 14 Dec 2021 19:15:40 PST |
	|         | ssh sudo crictl images                                                      |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:40 PST | Tue, 14 Dec 2021 19:15:41 PST |
	|         | ssh sudo docker rmi                                                         |                                |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                     |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:42 PST | Tue, 14 Dec 2021 19:15:44 PST |
	|         | cache reload                                                                |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:44 PST | Tue, 14 Dec 2021 19:15:44 PST |
	|         | ssh sudo crictl inspecti                                                    |                                |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                     |                                |         |         |                               |                               |
	| cache   | delete k8s.gcr.io/pause:3.1                                                 | minikube                       | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:44 PST | Tue, 14 Dec 2021 19:15:44 PST |
	| cache   | delete k8s.gcr.io/pause:latest                                              | minikube                       | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:44 PST | Tue, 14 Dec 2021 19:15:44 PST |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:45 PST | Tue, 14 Dec 2021 19:15:45 PST |
	|         | kubectl -- --context                                                        |                                |         |         |                               |                               |
	|         | functional-20211214191315-1964                                              |                                |         |         |                               |                               |
	|         | get pods                                                                    |                                |         |         |                               |                               |
	| kubectl | --profile=functional-20211214191315-1964                                    | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:15:45 PST | Tue, 14 Dec 2021 19:15:45 PST |
	|         | -- --context                                                                |                                |         |         |                               |                               |
	|         | functional-20211214191315-1964 get pods                                     |                                |         |         |                               |                               |
	| -p      | functional-20211214191315-1964                                              | functional-20211214191315-1964 | jenkins | v1.24.0 | Tue, 14 Dec 2021 19:16:14 PST | Tue, 14 Dec 2021 19:16:17 PST |
	|         | logs -n 25                                                                  |                                |         |         |                               |                               |
	|---------|-----------------------------------------------------------------------------|--------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/14 19:15:46
	Running on machine: 37309
	Binary: Built with gc go1.17.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1214 19:15:46.489738    3861 out.go:297] Setting OutFile to fd 1 ...
	I1214 19:15:46.489863    3861 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:15:46.489866    3861 out.go:310] Setting ErrFile to fd 2...
	I1214 19:15:46.489868    3861 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:15:46.489939    3861 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/bin
	I1214 19:15:46.490185    3861 out.go:304] Setting JSON to false
	I1214 19:15:46.514010    3861 start.go:112] hostinfo: {"hostname":"37309.local","uptime":922,"bootTime":1639537224,"procs":316,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1214 19:15:46.514091    3861 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1214 19:15:46.541174    3861 out.go:176] * [functional-20211214191315-1964] minikube v1.24.0 on Darwin 11.2.3
	I1214 19:15:46.541353    3861 notify.go:174] Checking for updates...
	I1214 19:15:46.589761    3861 out.go:176]   - MINIKUBE_LOCATION=13173
	I1214 19:15:46.622946    3861 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	I1214 19:15:46.648157    3861 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1214 19:15:46.674324    3861 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube
	I1214 19:15:46.674728    3861 config.go:176] Loaded profile config "functional-20211214191315-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 19:15:46.674762    3861 driver.go:344] Setting default libvirt URI to qemu:///system
	I1214 19:15:46.770775    3861 docker.go:132] docker version: linux-20.10.6
	I1214 19:15:46.770915    3861 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 19:15:46.952052    3861 info.go:263] docker info: {ID:5AO3:Q7BV:QPO2:IORE:2FWE:BSI4:OSEF:34WA:NLU4:XM3Q:JID7:HR3K Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:53 SystemTime:2021-12-15 03:15:46.880401842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 19:15:47.000605    3861 out.go:176] * Using the docker driver based on existing profile
	I1214 19:15:47.000656    3861 start.go:280] selected driver: docker
	I1214 19:15:47.000664    3861 start.go:795] validating driver "docker" against &{Name:functional-20211214191315-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:functional-20211214191315-1964 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddon
Images:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 19:15:47.000799    3861 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1214 19:15:47.001188    3861 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 19:15:47.187824    3861 info.go:263] docker info: {ID:5AO3:Q7BV:QPO2:IORE:2FWE:BSI4:OSEF:34WA:NLU4:XM3Q:JID7:HR3K Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:53 SystemTime:2021-12-15 03:15:47.114916447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 19:15:47.189994    3861 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1214 19:15:47.190026    3861 cni.go:93] Creating CNI manager for ""
	I1214 19:15:47.190033    3861 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1214 19:15:47.190044    3861 start_flags.go:298] config:
	{Name:functional-20211214191315-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:functional-20211214191315-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonI
mages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 19:15:47.236508    3861 out.go:176] * Starting control plane node functional-20211214191315-1964 in cluster functional-20211214191315-1964
	I1214 19:15:47.236656    3861 cache.go:118] Beginning downloading kic base image for docker with docker
	I1214 19:15:47.262553    3861 out.go:176] * Pulling base image ...
	I1214 19:15:47.262649    3861 preload.go:132] Checking if preload exists for k8s version v1.22.4 and runtime docker
	I1214 19:15:47.262668    3861 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab in local docker daemon
	I1214 19:15:47.262728    3861 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4
	I1214 19:15:47.262748    3861 cache.go:57] Caching tarball of preloaded images
	I1214 19:15:47.262970    3861 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1214 19:15:47.262986    3861 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.4 on docker
	I1214 19:15:47.264094    3861 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/config.json ...
	I1214 19:15:47.379931    3861 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab in local docker daemon, skipping pull
	I1214 19:15:47.379943    3861 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab exists in daemon, skipping load
	I1214 19:15:47.379953    3861 cache.go:206] Successfully downloaded all kic artifacts
	I1214 19:15:47.379988    3861 start.go:313] acquiring machines lock for functional-20211214191315-1964: {Name:mk594d6742213ef916a69c00c22eea3f8bde6474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1214 19:15:47.380072    3861 start.go:317] acquired machines lock for "functional-20211214191315-1964" in 68.787µs
	I1214 19:15:47.380095    3861 start.go:93] Skipping create...Using existing machine configuration
	I1214 19:15:47.380101    3861 fix.go:55] fixHost starting: 
	I1214 19:15:47.380343    3861 cli_runner.go:115] Run: docker container inspect functional-20211214191315-1964 --format={{.State.Status}}
	I1214 19:15:47.495263    3861 fix.go:108] recreateIfNeeded on functional-20211214191315-1964: state=Running err=<nil>
	W1214 19:15:47.495308    3861 fix.go:134] unexpected machine state, will restart: <nil>
	I1214 19:15:47.523096    3861 out.go:176] * Updating the running docker "functional-20211214191315-1964" container ...
	I1214 19:15:47.523125    3861 machine.go:88] provisioning docker machine ...
	I1214 19:15:47.523151    3861 ubuntu.go:169] provisioning hostname "functional-20211214191315-1964"
	I1214 19:15:47.523256    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:47.637934    3861 main.go:130] libmachine: Using SSH client type: native
	I1214 19:15:47.638115    3861 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 52227 <nil> <nil>}
	I1214 19:15:47.638126    3861 main.go:130] libmachine: About to run SSH command:
	sudo hostname functional-20211214191315-1964 && echo "functional-20211214191315-1964" | sudo tee /etc/hostname
	I1214 19:15:47.773226    3861 main.go:130] libmachine: SSH cmd err, output: <nil>: functional-20211214191315-1964
	
	I1214 19:15:47.773342    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:47.890603    3861 main.go:130] libmachine: Using SSH client type: native
	I1214 19:15:47.890754    3861 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 52227 <nil> <nil>}
	I1214 19:15:47.890765    3861 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-20211214191315-1964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20211214191315-1964/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-20211214191315-1964' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1214 19:15:48.017844    3861 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1214 19:15:48.017858    3861 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.p
em ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube}
	I1214 19:15:48.017888    3861 ubuntu.go:177] setting up certificates
	I1214 19:15:48.017897    3861 provision.go:83] configureAuth start
	I1214 19:15:48.017992    3861 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20211214191315-1964
	I1214 19:15:48.133762    3861 provision.go:138] copyHostCerts
	I1214 19:15:48.133850    3861 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.pem, removing ...
	I1214 19:15:48.133857    3861 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.pem
	I1214 19:15:48.133981    3861 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.pem (1078 bytes)
	I1214 19:15:48.134199    3861 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cert.pem, removing ...
	I1214 19:15:48.134208    3861 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cert.pem
	I1214 19:15:48.134265    3861 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cert.pem (1123 bytes)
	I1214 19:15:48.134407    3861 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/key.pem, removing ...
	I1214 19:15:48.134410    3861 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/key.pem
	I1214 19:15:48.134465    3861 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/key.pem (1675 bytes)
	I1214 19:15:48.134602    3861 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca-key.pem org=jenkins.functional-20211214191315-1964 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-20211214191315-1964]
	I1214 19:15:48.541072    3861 provision.go:172] copyRemoteCerts
	I1214 19:15:48.541137    3861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1214 19:15:48.541196    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:48.657603    3861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52227 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/functional-20211214191315-1964/id_rsa Username:docker}
	I1214 19:15:48.746055    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1214 19:15:48.762452    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1214 19:15:48.779860    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I1214 19:15:48.796463    3861 provision.go:86] duration metric: configureAuth took 777.886445ms
	I1214 19:15:48.796472    3861 ubuntu.go:193] setting minikube options for container-runtime
	I1214 19:15:48.796652    3861 config.go:176] Loaded profile config "functional-20211214191315-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 19:15:48.796741    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:48.914868    3861 main.go:130] libmachine: Using SSH client type: native
	I1214 19:15:48.914998    3861 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 52227 <nil> <nil>}
	I1214 19:15:48.915005    3861 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1214 19:15:49.039119    3861 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1214 19:15:49.039132    3861 ubuntu.go:71] root file system type: overlay
	I1214 19:15:49.039306    3861 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1214 19:15:49.039408    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:49.157332    3861 main.go:130] libmachine: Using SSH client type: native
	I1214 19:15:49.157488    3861 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 52227 <nil> <nil>}
	I1214 19:15:49.157530    3861 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1214 19:15:49.294842    3861 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1214 19:15:49.294961    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:49.412154    3861 main.go:130] libmachine: Using SSH client type: native
	I1214 19:15:49.412300    3861 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 52227 <nil> <nil>}
	I1214 19:15:49.412310    3861 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1214 19:15:49.542453    3861 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1214 19:15:49.542461    3861 machine.go:91] provisioned docker machine in 2.017610825s
	I1214 19:15:49.542470    3861 start.go:267] post-start starting for "functional-20211214191315-1964" (driver="docker")
	I1214 19:15:49.542473    3861 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1214 19:15:49.542543    3861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1214 19:15:49.542600    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:49.658775    3861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52227 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/functional-20211214191315-1964/id_rsa Username:docker}
	I1214 19:15:49.752882    3861 ssh_runner.go:195] Run: cat /etc/os-release
	I1214 19:15:49.756552    3861 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1214 19:15:49.756564    3861 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1214 19:15:49.756575    3861 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1214 19:15:49.756579    3861 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1214 19:15:49.756587    3861 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/addons for local assets ...
	I1214 19:15:49.756679    3861 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files for local assets ...
	I1214 19:15:49.756822    3861 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem -> 19642.pem in /etc/ssl/certs
	I1214 19:15:49.756970    3861 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/test/nested/copy/1964/hosts -> hosts in /etc/test/nested/copy/1964
	I1214 19:15:49.757015    3861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1964
	I1214 19:15:49.764068    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem --> /etc/ssl/certs/19642.pem (1708 bytes)
	I1214 19:15:49.780381    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/test/nested/copy/1964/hosts --> /etc/test/nested/copy/1964/hosts (40 bytes)
	I1214 19:15:49.796511    3861 start.go:270] post-start completed in 253.836206ms
	I1214 19:15:49.796584    3861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1214 19:15:49.796644    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:49.911092    3861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52227 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/functional-20211214191315-1964/id_rsa Username:docker}
	I1214 19:15:49.998268    3861 fix.go:57] fixHost completed within 2.615958738s
	I1214 19:15:49.998282    3861 start.go:80] releasing machines lock for "functional-20211214191315-1964", held for 2.615998134s
	I1214 19:15:49.998389    3861 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20211214191315-1964
	I1214 19:15:50.196870    3861 ssh_runner.go:195] Run: systemctl --version
	I1214 19:15:50.196870    3861 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1214 19:15:50.196940    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:50.196952    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:50.426963    3861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52227 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/functional-20211214191315-1964/id_rsa Username:docker}
	I1214 19:15:50.426970    3861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52227 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/functional-20211214191315-1964/id_rsa Username:docker}
	I1214 19:15:50.980157    3861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1214 19:15:50.989754    3861 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1214 19:15:50.999117    3861 cruntime.go:257] skipping containerd shutdown because we are bound to it
	I1214 19:15:50.999175    3861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1214 19:15:51.008537    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1214 19:15:51.020804    3861 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1214 19:15:51.097518    3861 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1214 19:15:51.170273    3861 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1214 19:15:51.179891    3861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1214 19:15:51.253682    3861 ssh_runner.go:195] Run: sudo systemctl start docker
	I1214 19:15:51.263183    3861 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1214 19:15:51.302226    3861 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1214 19:15:51.434269    3861 out.go:203] * Preparing Kubernetes v1.22.4 on Docker 20.10.11 ...
	I1214 19:15:51.434469    3861 cli_runner.go:115] Run: docker exec -t functional-20211214191315-1964 dig +short host.docker.internal
	I1214 19:15:51.715372    3861 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1214 19:15:51.715454    3861 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1214 19:15:51.719934    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:51.892911    3861 out.go:176]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1214 19:15:51.893078    3861 preload.go:132] Checking if preload exists for k8s version v1.22.4 and runtime docker
	I1214 19:15:51.893230    3861 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1214 19:15:51.926769    3861 docker.go:558] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20211214191315-1964
	k8s.gcr.io/kube-apiserver:v1.22.4
	k8s.gcr.io/kube-scheduler:v1.22.4
	k8s.gcr.io/kube-controller-manager:v1.22.4
	k8s.gcr.io/kube-proxy:v1.22.4
	kubernetesui/dashboard:v2.3.1
	k8s.gcr.io/etcd:3.5.0-0
	kubernetesui/metrics-scraper:v1.0.7
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	k8s.gcr.io/pause:3.3
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I1214 19:15:51.926779    3861 docker.go:489] Images already preloaded, skipping extraction
	I1214 19:15:51.926867    3861 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1214 19:15:51.957882    3861 docker.go:558] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20211214191315-1964
	k8s.gcr.io/kube-apiserver:v1.22.4
	k8s.gcr.io/kube-scheduler:v1.22.4
	k8s.gcr.io/kube-controller-manager:v1.22.4
	k8s.gcr.io/kube-proxy:v1.22.4
	kubernetesui/dashboard:v2.3.1
	k8s.gcr.io/etcd:3.5.0-0
	kubernetesui/metrics-scraper:v1.0.7
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	k8s.gcr.io/pause:3.3
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I1214 19:15:51.957898    3861 cache_images.go:79] Images are preloaded, skipping loading
	I1214 19:15:51.957989    3861 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1214 19:15:52.077868    3861 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1214 19:15:52.077893    3861 cni.go:93] Creating CNI manager for ""
	I1214 19:15:52.077901    3861 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1214 19:15:52.077908    3861 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1214 19:15:52.077920    3861 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.22.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20211214191315-1964 NodeName:functional-20211214191315-1964 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1214 19:15:52.078021    3861 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "functional-20211214191315-1964"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1214 19:15:52.078100    3861 kubeadm.go:927] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=functional-20211214191315-1964 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.4 ClusterName:functional-20211214191315-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1214 19:15:52.078158    3861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.22.4
	I1214 19:15:52.086126    3861 binaries.go:44] Found k8s binaries, skipping transfer
	I1214 19:15:52.086185    3861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1214 19:15:52.093235    3861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (356 bytes)
	I1214 19:15:52.105816    3861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1214 19:15:52.118253    3861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1923 bytes)
	I1214 19:15:52.131019    3861 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1214 19:15:52.135028    3861 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964 for IP: 192.168.49.2
	I1214 19:15:52.152083    3861 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.key
	I1214 19:15:52.152139    3861 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/proxy-client-ca.key
	I1214 19:15:52.152233    3861 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.key
	I1214 19:15:52.152297    3861 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/apiserver.key.dd3b5fb2
	I1214 19:15:52.152352    3861 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/proxy-client.key
	I1214 19:15:52.152569    3861 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/1964.pem (1338 bytes)
	W1214 19:15:52.152684    3861 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/1964_empty.pem, impossibly tiny 0 bytes
	I1214 19:15:52.152696    3861 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca-key.pem (1675 bytes)
	I1214 19:15:52.152735    3861 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem (1078 bytes)
	I1214 19:15:52.152769    3861 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/cert.pem (1123 bytes)
	I1214 19:15:52.152802    3861 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/key.pem (1675 bytes)
	I1214 19:15:52.152877    3861 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem (1708 bytes)
	I1214 19:15:52.153781    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1214 19:15:52.172993    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1214 19:15:52.190202    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1214 19:15:52.208351    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1214 19:15:52.225564    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1214 19:15:52.242692    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1214 19:15:52.259357    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1214 19:15:52.276377    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1214 19:15:52.293188    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1214 19:15:52.311980    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/1964.pem --> /usr/share/ca-certificates/1964.pem (1338 bytes)
	I1214 19:15:52.330188    3861 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem --> /usr/share/ca-certificates/19642.pem (1708 bytes)
	I1214 19:15:52.347120    3861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1214 19:15:52.360307    3861 ssh_runner.go:195] Run: openssl version
	I1214 19:15:52.365854    3861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1214 19:15:52.374169    3861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1214 19:15:52.378721    3861 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 15 03:07 /usr/share/ca-certificates/minikubeCA.pem
	I1214 19:15:52.378783    3861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1214 19:15:52.384365    3861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1214 19:15:52.392054    3861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1964.pem && ln -fs /usr/share/ca-certificates/1964.pem /etc/ssl/certs/1964.pem"
	I1214 19:15:52.399734    3861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1964.pem
	I1214 19:15:52.403583    3861 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 15 03:13 /usr/share/ca-certificates/1964.pem
	I1214 19:15:52.403626    3861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1964.pem
	I1214 19:15:52.409201    3861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1964.pem /etc/ssl/certs/51391683.0"
	I1214 19:15:52.417050    3861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19642.pem && ln -fs /usr/share/ca-certificates/19642.pem /etc/ssl/certs/19642.pem"
	I1214 19:15:52.425296    3861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19642.pem
	I1214 19:15:52.429475    3861 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 15 03:13 /usr/share/ca-certificates/19642.pem
	I1214 19:15:52.429526    3861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19642.pem
	I1214 19:15:52.435155    3861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19642.pem /etc/ssl/certs/3ec20f2e.0"
	I1214 19:15:52.457896    3861 kubeadm.go:390] StartCluster: {Name:functional-20211214191315-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:functional-20211214191315-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 19:15:52.458126    3861 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1214 19:15:52.488752    3861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1214 19:15:52.496530    3861 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I1214 19:15:52.496537    3861 kubeadm.go:600] restartCluster start
	I1214 19:15:52.496594    3861 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1214 19:15:52.503469    3861 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1214 19:15:52.503541    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:15:52.681904    3861 kubeconfig.go:92] found "functional-20211214191315-1964" server: "https://127.0.0.1:52226"
	I1214 19:15:52.684983    3861 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1214 19:15:52.692886    3861 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-12-15 03:14:08.651658086 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-12-15 03:15:52.128043025 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I1214 19:15:52.692894    3861 kubeadm.go:1050] stopping kube-system containers ...
	I1214 19:15:52.692982    3861 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1214 19:15:52.723325    3861 docker.go:390] Stopping containers: [f64a119e0f9e 2aa4dddf31ca d3053162ed40 e710431b012a 04a83caa39cf f7375b96ea70 b51d2e575c77 cf796b7222f1 98cdd8174af9 80134238c59c cfc1a8507ed7 7a8aaa408242 7e979a288049 9168974b805c ed86ade71b65]
	I1214 19:15:52.723433    3861 ssh_runner.go:195] Run: docker stop f64a119e0f9e 2aa4dddf31ca d3053162ed40 e710431b012a 04a83caa39cf f7375b96ea70 b51d2e575c77 cf796b7222f1 98cdd8174af9 80134238c59c cfc1a8507ed7 7a8aaa408242 7e979a288049 9168974b805c ed86ade71b65
	I1214 19:15:57.934541    3861 ssh_runner.go:235] Completed: docker stop f64a119e0f9e 2aa4dddf31ca d3053162ed40 e710431b012a 04a83caa39cf f7375b96ea70 b51d2e575c77 cf796b7222f1 98cdd8174af9 80134238c59c cfc1a8507ed7 7a8aaa408242 7e979a288049 9168974b805c ed86ade71b65: (5.20820415s)
	I1214 19:15:57.934629    3861 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1214 19:15:57.972652    3861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1214 19:15:57.980659    3861 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5639 Dec 15 03:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec 15 03:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Dec 15 03:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 15 03:14 /etc/kubernetes/scheduler.conf
	
	I1214 19:15:57.980715    3861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1214 19:15:57.988271    3861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1214 19:15:57.997014    3861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1214 19:15:58.004328    3861 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1214 19:15:58.004390    3861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1214 19:15:58.011437    3861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1214 19:15:58.018629    3861 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1214 19:15:58.018684    3861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1214 19:15:58.025642    3861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1214 19:15:58.033024    3861 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1214 19:15:58.033031    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1214 19:15:58.080004    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1214 19:15:58.868511    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1214 19:15:58.998109    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1214 19:15:59.051261    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1214 19:15:59.108180    3861 api_server.go:51] waiting for apiserver process to appear ...
	I1214 19:15:59.108249    3861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1214 19:15:59.124246    3861 api_server.go:71] duration metric: took 16.065396ms to wait for apiserver process to appear ...
	I1214 19:15:59.124258    3861 api_server.go:87] waiting for apiserver healthz status ...
	I1214 19:15:59.124267    3861 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52226/healthz ...
	I1214 19:15:59.129903    3861 api_server.go:266] https://127.0.0.1:52226/healthz returned 200:
	ok
	I1214 19:15:59.137061    3861 api_server.go:140] control plane version: v1.22.4
	I1214 19:15:59.137069    3861 api_server.go:130] duration metric: took 12.803292ms to wait for apiserver health ...
	I1214 19:15:59.137074    3861 cni.go:93] Creating CNI manager for ""
	I1214 19:15:59.137078    3861 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1214 19:15:59.137085    3861 system_pods.go:43] waiting for kube-system pods to appear ...
	I1214 19:15:59.145722    3861 system_pods.go:59] 7 kube-system pods found
	I1214 19:15:59.145734    3861 system_pods.go:61] "coredns-78fcd69978-2p7ps" [1c4fe165-b3ab-48b7-80a8-09a029198ddc] Running
	I1214 19:15:59.145742    3861 system_pods.go:61] "etcd-functional-20211214191315-1964" [bc9932a2-b1d8-444e-a9d9-ee1cdaae56a6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1214 19:15:59.145746    3861 system_pods.go:61] "kube-apiserver-functional-20211214191315-1964" [d931a160-85af-4b99-b6f4-768eb20bbd53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1214 19:15:59.145754    3861 system_pods.go:61] "kube-controller-manager-functional-20211214191315-1964" [f8cb9db7-809e-4767-aec3-a519992abf8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1214 19:15:59.145756    3861 system_pods.go:61] "kube-proxy-gxjj9" [2c8a5259-a85d-419b-a99e-9db6b5bb9984] Running
	I1214 19:15:59.145759    3861 system_pods.go:61] "kube-scheduler-functional-20211214191315-1964" [dcf4179d-7ecb-4ab2-9231-0b61695280eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1214 19:15:59.145762    3861 system_pods.go:61] "storage-provisioner" [fb70bc64-478f-4146-9565-9aa0691bc521] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1214 19:15:59.145765    3861 system_pods.go:74] duration metric: took 8.67376ms to wait for pod list to return data ...
	I1214 19:15:59.145768    3861 node_conditions.go:102] verifying NodePressure condition ...
	I1214 19:15:59.149042    3861 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I1214 19:15:59.149052    3861 node_conditions.go:123] node cpu capacity is 6
	I1214 19:15:59.149058    3861 node_conditions.go:105] duration metric: took 3.286769ms to run NodePressure ...
	I1214 19:15:59.149067    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1214 19:15:59.488621    3861 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I1214 19:15:59.492993    3861 kubeadm.go:746] kubelet initialised
	I1214 19:15:59.492998    3861 kubeadm.go:747] duration metric: took 4.367716ms waiting for restarted kubelet to initialise ...
	I1214 19:15:59.493002    3861 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1214 19:15:59.498307    3861 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-2p7ps" in "kube-system" namespace to be "Ready" ...
	I1214 19:15:59.509737    3861 pod_ready.go:92] pod "coredns-78fcd69978-2p7ps" in "kube-system" namespace has status "Ready":"True"
	I1214 19:15:59.509741    3861 pod_ready.go:81] duration metric: took 11.418663ms waiting for pod "coredns-78fcd69978-2p7ps" in "kube-system" namespace to be "Ready" ...
	I1214 19:15:59.509747    3861 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	I1214 19:16:00.027411    3861 pod_ready.go:97] node "functional-20211214191315-1964" hosting pod "etcd-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.027420    3861 pod_ready.go:81] duration metric: took 517.458244ms waiting for pod "etcd-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	E1214 19:16:00.027425    3861 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211214191315-1964" hosting pod "etcd-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.027437    3861 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	I1214 19:16:00.031880    3861 pod_ready.go:97] node "functional-20211214191315-1964" hosting pod "kube-apiserver-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.031889    3861 pod_ready.go:81] duration metric: took 4.444756ms waiting for pod "kube-apiserver-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	E1214 19:16:00.031893    3861 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211214191315-1964" hosting pod "kube-apiserver-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.031901    3861 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	I1214 19:16:00.036881    3861 pod_ready.go:97] node "functional-20211214191315-1964" hosting pod "kube-controller-manager-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.036894    3861 pod_ready.go:81] duration metric: took 4.986954ms waiting for pod "kube-controller-manager-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	E1214 19:16:00.036899    3861 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211214191315-1964" hosting pod "kube-controller-manager-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.036907    3861 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gxjj9" in "kube-system" namespace to be "Ready" ...
	I1214 19:16:00.348026    3861 pod_ready.go:97] node "functional-20211214191315-1964" hosting pod "kube-proxy-gxjj9" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.348036    3861 pod_ready.go:81] duration metric: took 310.99947ms waiting for pod "kube-proxy-gxjj9" in "kube-system" namespace to be "Ready" ...
	E1214 19:16:00.348041    3861 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211214191315-1964" hosting pod "kube-proxy-gxjj9" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.348049    3861 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	I1214 19:16:00.741441    3861 pod_ready.go:97] node "functional-20211214191315-1964" hosting pod "kube-scheduler-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.741452    3861 pod_ready.go:81] duration metric: took 393.246563ms waiting for pod "kube-scheduler-functional-20211214191315-1964" in "kube-system" namespace to be "Ready" ...
	E1214 19:16:00.741459    3861 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20211214191315-1964" hosting pod "kube-scheduler-functional-20211214191315-1964" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:00.741470    3861 pod_ready.go:38] duration metric: took 1.247963569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1214 19:16:00.741484    3861 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1214 19:16:00.754546    3861 ops.go:34] apiserver oom_adj: -16
	I1214 19:16:00.754552    3861 kubeadm.go:604] restartCluster took 8.253811473s
	I1214 19:16:00.754560    3861 kubeadm.go:392] StartCluster complete in 8.292472061s
	I1214 19:16:00.754570    3861 settings.go:142] acquiring lock: {Name:mk93abdcbbc46dc3353c37938fd5d548af35ef3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 19:16:00.754655    3861 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	I1214 19:16:00.755134    3861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig: {Name:mk605b877d3a6907cdf2ed75edbb40b36491c1e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 19:16:00.761166    3861 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20211214191315-1964" rescaled to 1
	I1214 19:16:00.761189    3861 start.go:207] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}
	I1214 19:16:00.761208    3861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1214 19:16:00.809498    3861 out.go:176] * Verifying Kubernetes components...
	I1214 19:16:00.761223    3861 addons.go:415] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I1214 19:16:00.761356    3861 config.go:176] Loaded profile config "functional-20211214191315-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 19:16:00.809618    3861 addons.go:65] Setting storage-provisioner=true in profile "functional-20211214191315-1964"
	I1214 19:16:00.809623    3861 addons.go:65] Setting default-storageclass=true in profile "functional-20211214191315-1964"
	I1214 19:16:00.809639    3861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1214 19:16:00.809646    3861 addons.go:153] Setting addon storage-provisioner=true in "functional-20211214191315-1964"
	W1214 19:16:00.809652    3861 addons.go:165] addon storage-provisioner should already be in state true
	I1214 19:16:00.809666    3861 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20211214191315-1964"
	I1214 19:16:00.809716    3861 host.go:66] Checking if "functional-20211214191315-1964" exists ...
	I1214 19:16:00.810235    3861 cli_runner.go:115] Run: docker container inspect functional-20211214191315-1964 --format={{.State.Status}}
	I1214 19:16:00.810426    3861 cli_runner.go:115] Run: docker container inspect functional-20211214191315-1964 --format={{.State.Status}}
	I1214 19:16:00.824519    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:16:00.876037    3861 start.go:754] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1214 19:16:00.959391    3861 addons.go:153] Setting addon default-storageclass=true in "functional-20211214191315-1964"
	I1214 19:16:00.974938    3861 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1214 19:16:00.974949    3861 addons.go:165] addon default-storageclass should already be in state true
	I1214 19:16:00.975012    3861 host.go:66] Checking if "functional-20211214191315-1964" exists ...
	I1214 19:16:00.975052    3861 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1214 19:16:00.975057    3861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1214 19:16:00.975151    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:16:00.977075    3861 cli_runner.go:115] Run: docker container inspect functional-20211214191315-1964 --format={{.State.Status}}
	I1214 19:16:00.978087    3861 node_ready.go:35] waiting up to 6m0s for node "functional-20211214191315-1964" to be "Ready" ...
	I1214 19:16:01.103777    3861 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1214 19:16:01.103763    3861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52227 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/functional-20211214191315-1964/id_rsa Username:docker}
	I1214 19:16:01.103785    3861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1214 19:16:01.103865    3861 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20211214191315-1964
	I1214 19:16:01.209054    3861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1214 19:16:01.229911    3861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52227 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/functional-20211214191315-1964/id_rsa Username:docker}
	I1214 19:16:01.327497    3861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1214 19:16:01.825801    3861 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I1214 19:16:01.825819    3861 addons.go:417] enableAddons completed in 1.064215937s
	I1214 19:16:02.986728    3861 node_ready.go:58] node "functional-20211214191315-1964" has status "Ready":"False"
	I1214 19:16:13.520274    3861 node_ready.go:53] error getting node "functional-20211214191315-1964": Get "https://127.0.0.1:52226/api/v1/nodes/functional-20211214191315-1964": EOF
	I1214 19:16:13.520283    3861 node_ready.go:38] duration metric: took 12.538938152s waiting for node "functional-20211214191315-1964" to be "Ready" ...
	I1214 19:16:13.546098    3861 out.go:176] 
	W1214 19:16:13.546259    3861 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-20211214191315-1964": Get "https://127.0.0.1:52226/api/v1/nodes/functional-20211214191315-1964": EOF
	W1214 19:16:13.546274    3861 out.go:241] * 
	W1214 19:16:13.547407    3861 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-12-15 03:13:38 UTC, end at Wed 2021-12-15 03:16:26 UTC. --
	Dec 15 03:14:05 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:14:05.485563900Z" level=info msg="Docker daemon" commit=847da18 graphdriver(s)=overlay2 version=20.10.11
	Dec 15 03:14:05 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:14:05.485619057Z" level=info msg="Daemon has completed initialization"
	Dec 15 03:14:05 functional-20211214191315-1964 systemd[1]: Started Docker Application Container Engine.
	Dec 15 03:14:05 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:14:05.514652808Z" level=info msg="API listen on [::]:2376"
	Dec 15 03:14:05 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:14:05.517634997Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 15 03:14:38 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:14:38.343980683Z" level=info msg="ignoring event" container=4947489144f7c5ffdb9999a5c820ca584c1c7ac34c664a343a8d023c7988858c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:10 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:10.002435615Z" level=info msg="ignoring event" container=2aa4dddf31ca5c95bb11ef5d391587a62994eaa6673f5b882a3e891b31620d02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:52 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:52.992313976Z" level=info msg="ignoring event" container=d3053162ed40383fbfc2e2d73d796b128b99547df390c2b1cc1dd4d262364777 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:52 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:52.993670031Z" level=info msg="ignoring event" container=f64a119e0f9ec9f1700de287f1c76c4d34480b79c193a1362ab79a4fd837a807 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:52 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:52.993692031Z" level=info msg="ignoring event" container=f7375b96ea701aecbe693239dc7bdafad90a75e2b2ee093b8bfdd051b22dd67c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.076282620Z" level=info msg="ignoring event" container=7e979a288049ff2cb5f990315fd55220e32bd4d18e96a2fc54b653c0d5d6abe5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.076340093Z" level=info msg="ignoring event" container=9168974b805ca0bed1cc3aa576020a6e552b8d3136cb72d7b2943ba7f9cc6399 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.077314827Z" level=info msg="ignoring event" container=b51d2e575c77af287b3c808a5bfadff0dec83dd6eb521c962f660ab11f79a74b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.077377683Z" level=info msg="ignoring event" container=ed86ade71b65f89b7e56691b91945252e14f13a5bbd396140ef689d837a15322 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.080372813Z" level=info msg="ignoring event" container=7a8aaa4082423495b515321130d7b8e3d74cd9fd31c53150f47035f7a35bf64e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.092667993Z" level=info msg="ignoring event" container=cf796b7222f10a52265b4a98ebe7187cdcc086bdaef20996641cbf54a79a3a85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.092697093Z" level=info msg="ignoring event" container=80134238c59ca704502f0fd924df88801bb39b532a8bf20ba02e926746b41180 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:53 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:53.092716016Z" level=info msg="ignoring event" container=04a83caa39cf4148989d264e7469d4fda6721330e50799c42644ffdff8850972 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:54 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:54.173220604Z" level=info msg="ignoring event" container=98cdd8174af90761a11855ba92e246cf2608cf1729505a3eeb340442e871ff3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:54 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:54.271695360Z" level=info msg="ignoring event" container=cfc1a8507ed775e23077bce53467504e1b363d7ecebd2fd9b2ec450ea053ad64 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:15:57 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:15:57.920802475Z" level=info msg="ignoring event" container=e710431b012add6f7f0da320605934ec185993a9d21feaa44688e2e802f79263 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:16:02 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:16:02.017450721Z" level=info msg="ignoring event" container=17332ff1609831c6b4d66cd6330e22efebcaec7515cdcd5b0be71fc683ea664b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:16:03 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:16:03.054691935Z" level=info msg="ignoring event" container=64b91154d522d9d9e38bee0e5e33ce57bc292e4aafbe12e94fc9837d87c12660 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:16:03 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:16:03.115519622Z" level=info msg="ignoring event" container=d5bb4b81c6217c89f0e371c4d5ff1f606f0bde69c49a85db7f5156eac3609eb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 15 03:16:03 functional-20211214191315-1964 dockerd[466]: time="2021-12-15T03:16:03.147142252Z" level=info msg="ignoring event" container=accfe53dabda18e036cfb0852026f12b75de12ee22b7913018a8924c68bff72d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	72c06de04b883       8a5cc299272d9       4 seconds ago        Running             kube-apiserver            2                   f9a74e91b429e
	64b91154d522d       8a5cc299272d9       24 seconds ago       Exited              kube-apiserver            1                   f9a74e91b429e
	4439a98b43e61       8d147537fb7d1       24 seconds ago       Running             coredns                   1                   e73c505cafa31
	ef1934ce8f8ac       6e38f40d628db       24 seconds ago       Running             storage-provisioner       2                   8f1305c3916db
	8c1c7474f72e2       0ce02f92d3e43       32 seconds ago       Running             kube-controller-manager   1                   4098fe804243a
	7c6987317e156       721ba97f54a65       32 seconds ago       Running             kube-scheduler            1                   6730186d71be2
	dcc8cf8c80d7f       0048118155842       32 seconds ago       Running             etcd                      1                   28efa3afb248b
	5d3ffdec1fc03       edeff87e48029       32 seconds ago       Running             kube-proxy                1                   f87f63e74d16e
	f64a119e0f9ec       6e38f40d628db       About a minute ago   Exited              storage-provisioner       1                   d3053162ed403
	e710431b012ad       8d147537fb7d1       About a minute ago   Exited              coredns                   0                   f7375b96ea701
	04a83caa39cf4       edeff87e48029       About a minute ago   Exited              kube-proxy                0                   b51d2e575c77a
	cf796b7222f10       0048118155842       2 minutes ago        Exited              etcd                      0                   7a8aaa4082423
	98cdd8174af90       721ba97f54a65       2 minutes ago        Exited              kube-scheduler            0                   7e979a288049f
	80134238c59ca       0ce02f92d3e43       2 minutes ago        Exited              kube-controller-manager   0                   9168974b805ca
	
	* 
	* ==> coredns [4439a98b43e6] <==
	* W1215 03:16:03.128120       1 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	W1215 03:16:03.128209       1 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	W1215 03:16:03.128269       1 reflector.go:436] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	E1215 03:16:03.981876       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:04.268693       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:04.479070       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:05.834363       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:06.025058       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:06.561972       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:10.448457       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:10.684428       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:11.829825       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:19.113587       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:19.278780       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:21.223731       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.21.1/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=529": dial tcp 10.96.0.1:443: connect: connection refused
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	
	* 
	* ==> coredns [e710431b012a] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20211214191315-1964
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20211214191315-1964
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bfd93799ca1a0aa711376fa94919427c19ad092
	                    minikube.k8s.io/name=functional-20211214191315-1964
	                    minikube.k8s.io/updated_at=2021_12_14T19_14_26_0700
	                    minikube.k8s.io/version=v1.24.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 15 Dec 2021 03:14:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20211214191315-1964
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 15 Dec 2021 03:15:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 15 Dec 2021 03:15:59 +0000   Wed, 15 Dec 2021 03:14:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 15 Dec 2021 03:15:59 +0000   Wed, 15 Dec 2021 03:14:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 15 Dec 2021 03:15:59 +0000   Wed, 15 Dec 2021 03:14:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 15 Dec 2021 03:15:59 +0000   Wed, 15 Dec 2021 03:15:59 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20211214191315-1964
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc42efbef82a4767bb4c07ae962e4849
	  System UUID:                6a9894d5-bca9-448e-809c-b83e07de5e3f
	  Boot ID:                    ac973766-26ed-4bbd-9a65-162f7a72e07e
	  Kernel Version:             5.10.25-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.11
	  Kubelet Version:            v1.22.4
	  Kube-Proxy Version:         v1.22.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-78fcd69978-2p7ps                                  100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     109s
	  kube-system                 etcd-functional-20211214191315-1964                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kube-apiserver-functional-20211214191315-1964             250m (4%)     0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-controller-manager-functional-20211214191315-1964    200m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-gxjj9                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-functional-20211214191315-1964             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (12%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 106s                   kube-proxy  
	  Normal  Starting                 27s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  2m14s (x3 over 2m14s)  kubelet     Node functional-20211214191315-1964 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x3 over 2m14s)  kubelet     Node functional-20211214191315-1964 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x3 over 2m14s)  kubelet     Node functional-20211214191315-1964 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m14s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m14s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 2m                     kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m                     kubelet     Node functional-20211214191315-1964 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                     kubelet     Node functional-20211214191315-1964 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                     kubelet     Node functional-20211214191315-1964 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  119s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                110s                   kubelet     Node functional-20211214191315-1964 status is now: NodeReady
	  Normal  Starting                 27s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s                    kubelet     Node functional-20211214191315-1964 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s                    kubelet     Node functional-20211214191315-1964 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s                    kubelet     Node functional-20211214191315-1964 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             27s                    kubelet     Node functional-20211214191315-1964 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  27s                    kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.039846] bpfilter: write fail -32
	[  +0.027248] bpfilter: write fail -32
	[  +0.028391] bpfilter: read fail 0
	[  +0.037122] bpfilter: read fail 0
	[  +0.028008] bpfilter: write fail -32
	[  +0.031987] bpfilter: read fail 0
	[  +0.043629] bpfilter: write fail -32
	[  +0.029548] bpfilter: read fail 0
	[  +0.026632] bpfilter: write fail -32
	[  +0.028901] bpfilter: read fail 0
	[  +0.028659] bpfilter: read fail 0
	[  +0.027035] bpfilter: read fail 0
	[  +0.042117] bpfilter: read fail 0
	[  +0.034861] bpfilter: read fail 0
	[  +0.028712] bpfilter: read fail 0
	[  +0.041722] bpfilter: read fail 0
	[  +0.032883] bpfilter: read fail 0
	[  +0.029512] bpfilter: read fail 0
	[  +0.031968] bpfilter: read fail 0
	[  +0.032934] bpfilter: read fail 0
	[  +0.029590] bpfilter: read fail 0
	[  +0.030730] bpfilter: read fail 0
	[  +0.027981] bpfilter: read fail 0
	[  +0.039550] bpfilter: read fail 0
	[  +0.029093] bpfilter: read fail 0
	
	* 
	* ==> etcd [cf796b7222f1] <==
	* {"level":"info","ts":"2021-12-15T03:14:20.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2021-12-15T03:14:20.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2021-12-15T03:14:20.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2021-12-15T03:14:20.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-12-15T03:14:20.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2021-12-15T03:14:20.247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-12-15T03:14:20.247Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-15T03:14:20.248Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-15T03:14:20.248Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-15T03:14:20.248Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-15T03:14:20.248Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20211214191315-1964 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-15T03:14:20.248Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-15T03:14:20.249Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-15T03:14:20.249Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-12-15T03:14:20.248Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-15T03:14:20.249Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-12-15T03:14:20.249Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2021-12-15T03:15:52.876Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2021-12-15T03:15:52.876Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20211214191315-1964","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2021/12/15 03:15:52 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	WARNING: 2021/12/15 03:15:52 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2021-12-15T03:15:52.886Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2021-12-15T03:15:52.966Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-12-15T03:15:52.968Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-12-15T03:15:52.968Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20211214191315-1964","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [dcc8cf8c80d7] <==
	* {"level":"info","ts":"2021-12-15T03:15:54.889Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-12-15T03:15:54.890Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.0","cluster-id":"fa54960ea34d58be","cluster-version":"3.5"}
	{"level":"info","ts":"2021-12-15T03:15:54.891Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-12-15T03:15:54.891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2021-12-15T03:15:54.891Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2021-12-15T03:15:54.891Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-12-15T03:15:54.894Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-12-15T03:15:54.894Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-12-15T03:15:54.894Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-12-15T03:15:54.894Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2021-12-15T03:15:54.894Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-12-15T03:15:55.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2021-12-15T03:15:55.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2021-12-15T03:15:55.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2021-12-15T03:15:55.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2021-12-15T03:15:55.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2021-12-15T03:15:55.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2021-12-15T03:15:55.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2021-12-15T03:15:55.589Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20211214191315-1964 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2021-12-15T03:15:55.589Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-15T03:15:55.590Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-12-15T03:15:55.590Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2021-12-15T03:15:55.591Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-12-15T03:15:55.594Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-12-15T03:15:55.594Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  03:16:27 up 10 min,  0 users,  load average: 1.74, 1.81, 1.12
	Linux functional-20211214191315-1964 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [64b91154d522] <==
	* I1215 03:16:03.036097       1 server.go:553] external host was not specified, using 192.168.49.2
	I1215 03:16:03.036579       1 server.go:161] Version: v1.22.4
	Error: failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use
	
	* 
	* ==> kube-apiserver [72c06de04b88] <==
	* I1215 03:16:23.944597       1 dynamic_serving_content.go:129] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1215 03:16:23.944824       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1215 03:16:23.944849       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I1215 03:16:23.944862       1 controller.go:85] Starting OpenAPI controller
	I1215 03:16:23.944870       1 naming_controller.go:291] Starting NamingConditionController
	I1215 03:16:23.944877       1 establishing_controller.go:76] Starting EstablishingController
	I1215 03:16:23.944886       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1215 03:16:23.944892       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1215 03:16:23.944898       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1215 03:16:23.946136       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I1215 03:16:23.946160       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I1215 03:16:23.946473       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1215 03:16:23.946523       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E1215 03:16:23.948338       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1215 03:16:24.045064       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1215 03:16:24.058506       1 apf_controller.go:317] Running API Priority and Fairness config worker
	I1215 03:16:24.058749       1 cache.go:39] Caches are synced for autoregister controller
	I1215 03:16:24.058785       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I1215 03:16:24.058807       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1215 03:16:24.059097       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I1215 03:16:24.063019       1 controller.go:611] quota admission added evaluator for: endpoints
	I1215 03:16:24.069657       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I1215 03:16:24.944968       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1215 03:16:24.948799       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1215 03:16:24.952837       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	
	* 
	* ==> kube-controller-manager [80134238c59c] <==
	* I1215 03:14:36.876093       1 shared_informer.go:247] Caches are synced for PVC protection 
	I1215 03:14:36.880492       1 shared_informer.go:247] Caches are synced for persistent volume 
	I1215 03:14:36.910898       1 shared_informer.go:247] Caches are synced for stateful set 
	I1215 03:14:36.913435       1 shared_informer.go:247] Caches are synced for disruption 
	I1215 03:14:36.913461       1 disruption.go:371] Sending events to api server.
	I1215 03:14:36.918353       1 shared_informer.go:247] Caches are synced for taint 
	I1215 03:14:36.918396       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1215 03:14:36.918408       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	W1215 03:14:36.918463       1 node_lifecycle_controller.go:1013] Missing timestamp for Node functional-20211214191315-1964. Assuming now as a timestamp.
	I1215 03:14:36.918481       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
	I1215 03:14:36.918522       1 event.go:291] "Event occurred" object="functional-20211214191315-1964" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20211214191315-1964 event: Registered Node functional-20211214191315-1964 in Controller"
	I1215 03:14:36.943593       1 shared_informer.go:247] Caches are synced for resource quota 
	I1215 03:14:36.960418       1 shared_informer.go:247] Caches are synced for daemon sets 
	I1215 03:14:36.963217       1 shared_informer.go:247] Caches are synced for resource quota 
	I1215 03:14:37.063224       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I1215 03:14:37.113606       1 shared_informer.go:247] Caches are synced for attach detach 
	I1215 03:14:37.316683       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
	I1215 03:14:37.484736       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1215 03:14:37.508173       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1215 03:14:37.508228       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1215 03:14:37.554526       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I1215 03:14:37.669605       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gxjj9"
	I1215 03:14:37.766673       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-28d69"
	I1215 03:14:37.770138       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-2p7ps"
	I1215 03:14:37.783176       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-28d69"
	
	* 
	* ==> kube-controller-manager [8c1c7474f72e] <==
	* I1215 03:15:59.393651       1 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
	I1215 03:15:59.393655       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
	I1215 03:15:59.414246       1 controllermanager.go:577] Started "replicationcontroller"
	I1215 03:15:59.414292       1 replica_set.go:186] Starting replicationcontroller controller
	I1215 03:15:59.414300       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
	I1215 03:15:59.469414       1 controllermanager.go:577] Started "ephemeral-volume"
	I1215 03:15:59.469485       1 controller.go:170] Starting ephemeral volume controller
	I1215 03:15:59.469492       1 shared_informer.go:240] Waiting for caches to sync for ephemeral
	I1215 03:15:59.514954       1 controllermanager.go:577] Started "deployment"
	W1215 03:15:59.514987       1 core.go:245] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
	W1215 03:15:59.514991       1 controllermanager.go:569] Skipping "route"
	I1215 03:15:59.515055       1 deployment_controller.go:153] "Starting controller" controller="deployment"
	I1215 03:15:59.515062       1 shared_informer.go:240] Waiting for caches to sync for deployment
	I1215 03:15:59.567662       1 node_ipam_controller.go:91] Sending events to api server.
	W1215 03:16:09.569725       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W1215 03:16:10.070297       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W1215 03:16:11.070723       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W1215 03:16:13.072240       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	E1215 03:16:13.072328       1 cidr_allocator.go:137] Failed to list all nodes: Get "https://192.168.49.2:8441/api/v1/nodes": failed to get token for kube-system/node-controller: timed out waiting for the condition
	W1215 03:16:19.563487       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W1215 03:16:20.064202       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W1215 03:16:21.065070       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W1215 03:16:23.961000       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	E1215 03:16:23.961272       1 cidr_allocator.go:137] Failed to list all nodes: Get "https://192.168.49.2:8441/api/v1/nodes": failed to get token for kube-system/node-controller: timed out waiting for the condition
	E1215 03:16:23.961171       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ServiceAccount: unknown (get serviceaccounts)
	
	* 
	* ==> kube-proxy [04a83caa39cf] <==
	* I1215 03:14:38.478529       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I1215 03:14:38.478616       1 server_others.go:140] Detected node IP 192.168.49.2
	W1215 03:14:38.478629       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I1215 03:14:40.670471       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I1215 03:14:40.670534       1 server_others.go:212] Using iptables Proxier.
	I1215 03:14:40.670542       1 server_others.go:219] creating dualStackProxier for iptables.
	W1215 03:14:40.670554       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I1215 03:14:40.670925       1 server.go:649] Version: v1.22.4
	I1215 03:14:40.671441       1 config.go:315] Starting service config controller
	I1215 03:14:40.671679       1 config.go:224] Starting endpoint slice config controller
	I1215 03:14:40.671690       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1215 03:14:40.671795       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1215 03:14:40.772107       1 shared_informer.go:247] Caches are synced for service config 
	I1215 03:14:40.772108       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1215 03:14:55.385982       1 trace.go:205] Trace[1574108429]: "iptables restore" (15-Dec-2021 03:14:53.339) (total time: 2046ms):
	Trace[1574108429]: [2.046814362s] [2.046814362s] END
	I1215 03:15:05.078911       1 trace.go:205] Trace[861784797]: "iptables restore" (15-Dec-2021 03:15:02.815) (total time: 2262ms):
	Trace[861784797]: [2.262954534s] [2.262954534s] END
	I1215 03:15:27.560508       1 trace.go:205] Trace[2080217812]: "iptables restore" (15-Dec-2021 03:15:25.495) (total time: 2064ms):
	Trace[2080217812]: [2.064668229s] [2.064668229s] END
	
	* 
	* ==> kube-proxy [5d3ffdec1fc0] <==
	* E1215 03:15:54.787435       1 node.go:161] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211214191315-1964": dial tcp 192.168.49.2:8441: connect: connection refused
	I1215 03:15:57.291183       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I1215 03:15:57.291203       1 server_others.go:140] Detected node IP 192.168.49.2
	W1215 03:15:57.291256       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	I1215 03:15:59.674391       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I1215 03:15:59.674438       1 server_others.go:212] Using iptables Proxier.
	I1215 03:15:59.674447       1 server_others.go:219] creating dualStackProxier for iptables.
	W1215 03:15:59.674455       1 server_others.go:495] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I1215 03:15:59.674910       1 server.go:649] Version: v1.22.4
	I1215 03:15:59.675386       1 config.go:224] Starting endpoint slice config controller
	I1215 03:15:59.675433       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1215 03:15:59.675486       1 config.go:315] Starting service config controller
	I1215 03:15:59.675528       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1215 03:15:59.776270       1 shared_informer.go:247] Caches are synced for service config 
	I1215 03:15:59.776319       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1215 03:16:08.509120       1 trace.go:205] Trace[1732331907]: "iptables restore" (15-Dec-2021 03:16:06.402) (total time: 2106ms):
	Trace[1732331907]: [2.106941004s] [2.106941004s] END
	I1215 03:16:17.917426       1 trace.go:205] Trace[12337458]: "iptables restore" (15-Dec-2021 03:16:15.446) (total time: 2471ms):
	Trace[12337458]: [2.471033854s] [2.471033854s] END
	
	* 
	* ==> kube-scheduler [7c6987317e15] <==
	* I1215 03:15:55.430561       1 serving.go:347] Generated self-signed cert in-memory
	W1215 03:15:57.210880       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1215 03:15:57.210915       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1215 03:15:57.210931       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1215 03:15:57.210935       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1215 03:15:57.277759       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I1215 03:15:57.277853       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1215 03:15:57.277921       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1215 03:15:57.278236       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E1215 03:15:57.287140       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1215 03:15:57.287176       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1215 03:15:57.287206       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1215 03:15:57.287226       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1215 03:15:57.287231       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1215 03:15:57.287242       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1215 03:15:57.287257       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E1215 03:15:57.288506       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I1215 03:15:57.378199       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1215 03:16:23.964614       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E1215 03:16:23.964722       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E1215 03:16:23.964785       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E1215 03:16:23.965270       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E1215 03:16:23.975308       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	* 
	* ==> kube-scheduler [98cdd8174af9] <==
	* E1215 03:14:22.036186       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1215 03:14:22.036497       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1215 03:14:22.036752       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1215 03:14:22.036880       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1215 03:14:22.036888       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1215 03:14:22.037145       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1215 03:14:22.037373       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1215 03:14:22.037471       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1215 03:14:22.038776       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1215 03:14:22.040260       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1215 03:14:22.040387       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1215 03:14:22.040459       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1215 03:14:22.040523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1215 03:14:22.040630       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1215 03:14:22.893616       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1215 03:14:22.959362       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1215 03:14:22.963661       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1215 03:14:23.001823       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1215 03:14:23.038634       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1215 03:14:23.063601       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1215 03:14:23.092531       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1215 03:14:23.434543       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I1215 03:15:52.893347       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1215 03:15:52.893758       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1215 03:15:52.893998       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-12-15 03:13:38 UTC, end at Wed 2021-12-15 03:16:28 UTC. --
	Dec 15 03:16:11 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:11.082168    6116 status_manager.go:601] "Failed to get status for pod" podUID=1c4fe165-b3ab-48b7-80a8-09a029198ddc pod="kube-system/coredns-78fcd69978-2p7ps" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-78fcd69978-2p7ps\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:11 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:11.082314    6116 status_manager.go:601] "Failed to get status for pod" podUID=3e79e46645c5ff06219b2411b43b9513 pod="kube-system/kube-apiserver-functional-20211214191315-1964" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20211214191315-1964\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:11 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:11.082439    6116 status_manager.go:601] "Failed to get status for pod" podUID=64059561543026c4018b52111bbdf496 pod="kube-system/kube-controller-manager-functional-20211214191315-1964" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20211214191315-1964\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:11 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:11.082562    6116 status_manager.go:601] "Failed to get status for pod" podUID=fb70bc64-478f-4146-9565-9aa0691bc521 pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:11 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:11.127308    6116 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Dec 15 03:16:11 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:11.128268    6116 status_manager.go:601] "Failed to get status for pod" podUID=1c4fe165-b3ab-48b7-80a8-09a029198ddc pod="kube-system/coredns-78fcd69978-2p7ps" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-78fcd69978-2p7ps\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:12 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:12.621896    6116 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211214191315-1964?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Dec 15 03:16:15 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:15.372893    6116 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-20211214191315-1964.16c0cf45d5fd9c36", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-20211214191315-1964", UID:"3e79e46645c5ff06219b2411b43b9513", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"BackOff", Message:"B
ack-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"functional-20211214191315-1964"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc066755cccadde36, ext:4214666564, loc:(*time.Location)(0x77ab6e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc066755cccadde36, ext:4214666564, loc:(*time.Location)(0x77ab6e0)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	Dec 15 03:16:15 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:15.815998    6116 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211214191315-1964?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Dec 15 03:16:19 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:19.592017    6116 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"functional-20211214191315-1964\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211214191315-1964?resourceVersion=0&timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:19 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:19.592321    6116 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"functional-20211214191315-1964\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211214191315-1964?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:19 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:19.592787    6116 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"functional-20211214191315-1964\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211214191315-1964?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:19 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:19.592997    6116 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"functional-20211214191315-1964\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211214191315-1964?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:19 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:19.593168    6116 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting node \"functional-20211214191315-1964\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20211214191315-1964?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:19 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:19.593208    6116 kubelet_node_status.go:457] "Unable to update node status" err="update node status exceeds retry count"
	Dec 15 03:16:21 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:21.074811    6116 status_manager.go:601] "Failed to get status for pod" podUID=3e79e46645c5ff06219b2411b43b9513 pod="kube-system/kube-apiserver-functional-20211214191315-1964" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20211214191315-1964\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:21 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:21.075235    6116 status_manager.go:601] "Failed to get status for pod" podUID=64059561543026c4018b52111bbdf496 pod="kube-system/kube-controller-manager-functional-20211214191315-1964" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20211214191315-1964\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:21 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:21.075608    6116 status_manager.go:601] "Failed to get status for pod" podUID=fb70bc64-478f-4146-9565-9aa0691bc521 pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:21 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:21.075950    6116 status_manager.go:601] "Failed to get status for pod" podUID=7e1fca3ceff1ea1bbb21731965864899 pod="kube-system/etcd-functional-20211214191315-1964" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20211214191315-1964\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:21 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:21.076164    6116 status_manager.go:601] "Failed to get status for pod" podUID=158b4d3fe2f0b69d80a4c203dc10174b pod="kube-system/kube-scheduler-functional-20211214191315-1964" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20211214191315-1964\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:21 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:21.076366    6116 status_manager.go:601] "Failed to get status for pod" podUID=1c4fe165-b3ab-48b7-80a8-09a029198ddc pod="kube-system/coredns-78fcd69978-2p7ps" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-78fcd69978-2p7ps\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Dec 15 03:16:22 functional-20211214191315-1964 kubelet[6116]: I1215 03:16:22.073590    6116 scope.go:110] "RemoveContainer" containerID="64b91154d522d9d9e38bee0e5e33ce57bc292e4aafbe12e94fc9837d87c12660"
	Dec 15 03:16:22 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:22.216736    6116 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20211214191315-1964?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Dec 15 03:16:23 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:23.959637    6116 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Dec 15 03:16:23 functional-20211214191315-1964 kubelet[6116]: E1215 03:16:23.959631    6116 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	* 
	* ==> storage-provisioner [ef1934ce8f8a] <==
	* I1215 03:16:02.181752       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1215 03:16:02.194896       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1215 03:16:02.194923       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E1215 03:16:05.655127       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:09.913376       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:13.509098       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:16.553607       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E1215 03:16:19.573584       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I1215 03:16:24.065008       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1215 03:16:24.065431       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc75cbeb-f0ab-4a0a-ad48-bee87e673b9d", APIVersion:"v1", ResourceVersion:"570", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20211214191315-1964_c24c6cb1-1258-4105-bacb-a5403c0e8eda became leader
	I1215 03:16:24.065591       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20211214191315-1964_c24c6cb1-1258-4105-bacb-a5403c0e8eda!
	I1215 03:16:24.166246       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20211214191315-1964_c24c6cb1-1258-4105-bacb-a5403c0e8eda!
	
	* 
	* ==> storage-provisioner [f64a119e0f9e] <==
	* I1215 03:15:10.975949       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1215 03:15:10.984957       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1215 03:15:10.985004       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1215 03:15:10.997867       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1215 03:15:10.997992       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20211214191315-1964_1fec940e-cd00-4f60-b45c-53c04bc9b5a3!
	I1215 03:15:10.997922       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc75cbeb-f0ab-4a0a-ad48-bee87e673b9d", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20211214191315-1964_1fec940e-cd00-4f60-b45c-53c04bc9b5a3 became leader
	I1215 03:15:11.098810       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20211214191315-1964_1fec940e-cd00-4f60-b45c-53c04bc9b5a3!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-20211214191315-1964 -n functional-20211214191315-1964
helpers_test.go:262: (dbg) Run:  kubectl --context functional-20211214191315-1964 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestFunctional/serial/ComponentHealth]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context functional-20211214191315-1964 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context functional-20211214191315-1964 describe pod : exit status 1 (41.945126ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context functional-20211214191315-1964 describe pod : exit status 1
--- FAIL: TestFunctional/serial/ComponentHealth (11.14s)

TestNetworkPlugins/group/calico/Start (562.34s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20211214195818-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
E1214 20:16:45.981559    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 20:16:51.193144    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p calico-20211214195818-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : exit status 80 (9m22.312047858s)

-- stdout --
	* [calico-20211214195818-1964] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13173
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node calico-20211214195818-1964 in cluster calico-20211214195818-1964
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.22.4 on Docker 20.10.11 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I1214 20:15:49.954162   18209 out.go:297] Setting OutFile to fd 1 ...
	I1214 20:15:49.954305   18209 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 20:15:49.954310   18209 out.go:310] Setting ErrFile to fd 2...
	I1214 20:15:49.954313   18209 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 20:15:49.954393   18209 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/bin
	I1214 20:15:49.954738   18209 out.go:304] Setting JSON to false
	I1214 20:15:49.980890   18209 start.go:112] hostinfo: {"hostname":"37309.local","uptime":4525,"bootTime":1639537224,"procs":322,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1214 20:15:49.980998   18209 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1214 20:15:50.008080   18209 out.go:176] * [calico-20211214195818-1964] minikube v1.24.0 on Darwin 11.2.3
	I1214 20:15:50.008147   18209 notify.go:174] Checking for updates...
	I1214 20:15:50.054662   18209 out.go:176]   - MINIKUBE_LOCATION=13173
	I1214 20:15:50.080475   18209 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	I1214 20:15:50.106628   18209 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1214 20:15:50.132696   18209 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube
	I1214 20:15:50.133213   18209 config.go:176] Loaded profile config "cilium-20211214195818-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 20:15:50.133258   18209 driver.go:344] Setting default libvirt URI to qemu:///system
	I1214 20:15:50.232695   18209 docker.go:132] docker version: linux-20.10.6
	I1214 20:15:50.232844   18209 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 20:15:50.443278   18209 info.go:263] docker info: {ID:5AO3:Q7BV:QPO2:IORE:2FWE:BSI4:OSEF:34WA:NLU4:XM3Q:JID7:HR3K Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2021-12-15 04:15:50.354471244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 20:15:50.524159   18209 out.go:176] * Using the docker driver based on user configuration
	I1214 20:15:50.524185   18209 start.go:280] selected driver: docker
	I1214 20:15:50.524190   18209 start.go:795] validating driver "docker" against <nil>
	I1214 20:15:50.524219   18209 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1214 20:15:50.526835   18209 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 20:15:50.709058   18209 info.go:263] docker info: {ID:5AO3:Q7BV:QPO2:IORE:2FWE:BSI4:OSEF:34WA:NLU4:XM3Q:JID7:HR3K Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2021-12-15 04:15:50.641605685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 20:15:50.709183   18209 start_flags.go:284] no existing cluster config was found, will generate one from the flags 
	I1214 20:15:50.709314   18209 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1214 20:15:50.709330   18209 cni.go:93] Creating CNI manager for "calico"
	I1214 20:15:50.709337   18209 start_flags.go:293] Found "Calico" CNI - setting NetworkPlugin=cni
	I1214 20:15:50.709351   18209 start_flags.go:298] config:
	{Name:calico-20211214195818-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:calico-20211214195818-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 20:15:50.756598   18209 out.go:176] * Starting control plane node calico-20211214195818-1964 in cluster calico-20211214195818-1964
	I1214 20:15:50.756684   18209 cache.go:118] Beginning downloading kic base image for docker with docker
	I1214 20:15:50.782810   18209 out.go:176] * Pulling base image ...
	I1214 20:15:50.782868   18209 preload.go:132] Checking if preload exists for k8s version v1.22.4 and runtime docker
	I1214 20:15:50.782956   18209 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab in local docker daemon
	I1214 20:15:50.782954   18209 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4
	I1214 20:15:50.782999   18209 cache.go:57] Caching tarball of preloaded images
	I1214 20:15:50.783134   18209 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1214 20:15:50.783149   18209 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.4 on docker
	I1214 20:15:50.783774   18209 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/config.json ...
	I1214 20:15:50.783913   18209 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/config.json: {Name:mk30d2e6c1805c4eed324d6a3969f5bfd2a70928 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:15:50.898698   18209 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab in local docker daemon, skipping pull
	I1214 20:15:50.898731   18209 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab exists in daemon, skipping load
	I1214 20:15:50.898743   18209 cache.go:206] Successfully downloaded all kic artifacts
	I1214 20:15:50.898795   18209 start.go:313] acquiring machines lock for calico-20211214195818-1964: {Name:mk79a090e378f0a633439149a8504697b29f7ce8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1214 20:15:50.898931   18209 start.go:317] acquired machines lock for "calico-20211214195818-1964" in 124.289µs
	I1214 20:15:50.898967   18209 start.go:89] Provisioning new machine with config: &{Name:calico-20211214195818-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:calico-20211214195818-1964 Namespace:default APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker} &{Name: IP: Port:8443 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}
	I1214 20:15:50.899019   18209 start.go:126] createHost starting for "" (driver="docker")
	I1214 20:15:50.925627   18209 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1214 20:15:50.925800   18209 start.go:160] libmachine.API.Create for "calico-20211214195818-1964" (driver="docker")
	I1214 20:15:50.925826   18209 client.go:168] LocalClient.Create starting
	I1214 20:15:50.925921   18209 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem
	I1214 20:15:50.925959   18209 main.go:130] libmachine: Decoding PEM data...
	I1214 20:15:50.925976   18209 main.go:130] libmachine: Parsing certificate...
	I1214 20:15:50.926040   18209 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/cert.pem
	I1214 20:15:50.946434   18209 main.go:130] libmachine: Decoding PEM data...
	I1214 20:15:50.946502   18209 main.go:130] libmachine: Parsing certificate...
	I1214 20:15:50.947041   18209 cli_runner.go:115] Run: docker network inspect calico-20211214195818-1964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1214 20:15:51.062651   18209 cli_runner.go:162] docker network inspect calico-20211214195818-1964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1214 20:15:51.062770   18209 network_create.go:254] running [docker network inspect calico-20211214195818-1964] to gather additional debugging logs...
	I1214 20:15:51.062785   18209 cli_runner.go:115] Run: docker network inspect calico-20211214195818-1964
	W1214 20:15:51.177006   18209 cli_runner.go:162] docker network inspect calico-20211214195818-1964 returned with exit code 1
	I1214 20:15:51.177030   18209 network_create.go:257] error running [docker network inspect calico-20211214195818-1964]: docker network inspect calico-20211214195818-1964: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20211214195818-1964
	I1214 20:15:51.177043   18209 network_create.go:259] output of [docker network inspect calico-20211214195818-1964]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20211214195818-1964
	
	** /stderr **
	I1214 20:15:51.177151   18209 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1214 20:15:51.291149   18209 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005b2120] misses:0}
	I1214 20:15:51.291207   18209 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1214 20:15:51.291229   18209 network_create.go:106] attempt to create docker network calico-20211214195818-1964 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1214 20:15:51.291322   18209 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20211214195818-1964
	W1214 20:15:51.465554   18209 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20211214195818-1964 returned with exit code 1
	W1214 20:15:51.465595   18209 network_create.go:98] failed to create docker network calico-20211214195818-1964 192.168.49.0/24, will retry: subnet is taken
	I1214 20:15:51.465807   18209 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005b2120] amended:false}} dirty:map[] misses:0}
	I1214 20:15:51.465824   18209 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1214 20:15:51.465992   18209 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0005b2120] amended:true}} dirty:map[192.168.49.0:0xc0005b2120 192.168.58.0:0xc0003903d0] misses:0}
	I1214 20:15:51.466004   18209 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1214 20:15:51.466010   18209 network_create.go:106] attempt to create docker network calico-20211214195818-1964 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1214 20:15:51.466088   18209 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20211214195818-1964
	I1214 20:15:57.437054   18209 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20211214195818-1964: (5.970960998s)
	I1214 20:15:57.437075   18209 network_create.go:90] docker network calico-20211214195818-1964 192.168.58.0/24 created
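	The retry sequence above (reserve 192.168.49.0/24, fail because the subnet is taken, fall back to 192.168.58.0/24) follows a simple first-free-candidate scan. A minimal sketch of that pattern, assuming a fixed candidate list and a caller-supplied reservation set — minikube's real implementation also probes Docker for in-use subnets and holds time-limited reservations:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the subnet-probing loop: walk candidate /24 subnets
# (192.168.49.0, .58.0, ...) and return the first one not already reserved.
# The candidate list and the space-separated "reserved" argument are
# assumptions for illustration only.
pick_free_subnet() {
  local reserved="$1"   # space-separated list of taken subnets
  local third
  for third in 49 58 67 76 85 94; do
    local candidate="192.168.${third}.0/24"
    case " ${reserved} " in
      *" ${candidate} "*) continue ;;   # subnet reserved, try the next one
      *) echo "${candidate}"; return 0 ;;
    esac
  done
  return 1   # no free subnet among the candidates
}

pick_free_subnet "192.168.49.0/24"   # first candidate taken, prints the next
```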
	I1214 20:15:57.437095   18209 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20211214195818-1964" container
	I1214 20:15:57.437206   18209 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1214 20:15:57.551697   18209 cli_runner.go:115] Run: docker volume create calico-20211214195818-1964 --label name.minikube.sigs.k8s.io=calico-20211214195818-1964 --label created_by.minikube.sigs.k8s.io=true
	I1214 20:15:57.667936   18209 oci.go:102] Successfully created a docker volume calico-20211214195818-1964
	I1214 20:15:57.668095   18209 cli_runner.go:115] Run: docker run --rm --name calico-20211214195818-1964-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20211214195818-1964 --entrypoint /usr/bin/test -v calico-20211214195818-1964:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab -d /var/lib
	I1214 20:15:58.189044   18209 oci.go:106] Successfully prepared a docker volume calico-20211214195818-1964
	I1214 20:15:58.189084   18209 preload.go:132] Checking if preload exists for k8s version v1.22.4 and runtime docker
	I1214 20:15:58.189104   18209 kic.go:179] Starting extracting preloaded images to volume ...
	I1214 20:15:58.189204   18209 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20211214195818-1964:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab -I lz4 -xf /preloaded.tar -C /extractDir
	I1214 20:16:03.605548   18209 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20211214195818-1964:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab -I lz4 -xf /preloaded.tar -C /extractDir: (5.416330644s)
	I1214 20:16:03.605568   18209 kic.go:188] duration metric: took 5.416510 seconds to extract preloaded images to volume
	I1214 20:16:03.605681   18209 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1214 20:16:03.809338   18209 cli_runner.go:115] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20211214195818-1964 --name calico-20211214195818-1964 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20211214195818-1964 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20211214195818-1964 --network calico-20211214195818-1964 --ip 192.168.58.2 --volume calico-20211214195818-1964:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab
	I1214 20:16:06.472434   18209 cli_runner.go:168] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20211214195818-1964 --name calico-20211214195818-1964 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20211214195818-1964 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20211214195818-1964 --network calico-20211214195818-1964 --ip 192.168.58.2 --volume calico-20211214195818-1964:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab: (2.663033608s)
	I1214 20:16:06.472562   18209 cli_runner.go:115] Run: docker container inspect calico-20211214195818-1964 --format={{.State.Running}}
	I1214 20:16:06.604631   18209 cli_runner.go:115] Run: docker container inspect calico-20211214195818-1964 --format={{.State.Status}}
	I1214 20:16:06.740731   18209 cli_runner.go:115] Run: docker exec calico-20211214195818-1964 stat /var/lib/dpkg/alternatives/iptables
	I1214 20:16:06.927166   18209 oci.go:281] the created container "calico-20211214195818-1964" has a running status.
	I1214 20:16:06.927199   18209 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/calico-20211214195818-1964/id_rsa...
	I1214 20:16:07.229779   18209 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/calico-20211214195818-1964/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1214 20:16:07.415603   18209 cli_runner.go:115] Run: docker container inspect calico-20211214195818-1964 --format={{.State.Status}}
	I1214 20:16:07.542414   18209 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1214 20:16:07.542431   18209 kic_runner.go:114] Args: [docker exec --privileged calico-20211214195818-1964 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1214 20:16:07.721447   18209 cli_runner.go:115] Run: docker container inspect calico-20211214195818-1964 --format={{.State.Status}}
	I1214 20:16:07.846258   18209 machine.go:88] provisioning docker machine ...
	I1214 20:16:07.846307   18209 ubuntu.go:169] provisioning hostname "calico-20211214195818-1964"
	I1214 20:16:07.846442   18209 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211214195818-1964
	I1214 20:16:07.973405   18209 main.go:130] libmachine: Using SSH client type: native
	I1214 20:16:07.973635   18209 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 63898 <nil> <nil>}
	I1214 20:16:07.973648   18209 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20211214195818-1964 && echo "calico-20211214195818-1964" | sudo tee /etc/hostname
	I1214 20:16:08.106434   18209 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20211214195818-1964
	
	I1214 20:16:08.106541   18209 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211214195818-1964
	I1214 20:16:08.234511   18209 main.go:130] libmachine: Using SSH client type: native
	I1214 20:16:08.234709   18209 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 63898 <nil> <nil>}
	I1214 20:16:08.234730   18209 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20211214195818-1964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20211214195818-1964/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20211214195818-1964' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1214 20:16:08.363501   18209 main.go:130] libmachine: SSH cmd err, output: <nil>: 
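	The SSH script above edits /etc/hosts so 127.0.1.1 resolves to the new hostname: rewrite an existing 127.0.1.1 line if present, otherwise append one. The same idiom can be exercised locally against a temporary file (no sudo, no real /etc/hosts); the sample file contents here are assumptions for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the /etc/hosts hostname rewrite, run against a temp file.
set -euo pipefail

hosts_file=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "${hosts_file}"

name="calico-20211214195818-1964"
# Only touch the file if the hostname is not already mapped.
if ! grep -q "[[:space:]]${name}\$" "${hosts_file}"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "${hosts_file}"; then
    # An entry exists: rewrite it to point at the new hostname.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${name}/" "${hosts_file}"
  else
    # No entry: append a fresh one.
    echo "127.0.1.1 ${name}" >> "${hosts_file}"
  fi
fi

grep '^127\.0\.1\.1' "${hosts_file}"
```

Running the script twice is a no-op the second time, which is why the guard `grep` comes first.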
	I1214 20:16:08.363523   18209 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.p
em ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube}
	I1214 20:16:08.363547   18209 ubuntu.go:177] setting up certificates
	I1214 20:16:08.363557   18209 provision.go:83] configureAuth start
	I1214 20:16:08.363640   18209 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20211214195818-1964
	I1214 20:16:08.485858   18209 provision.go:138] copyHostCerts
	I1214 20:16:08.485977   18209 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.pem, removing ...
	I1214 20:16:08.486017   18209 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.pem
	I1214 20:16:08.486125   18209 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.pem (1078 bytes)
	I1214 20:16:08.486324   18209 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cert.pem, removing ...
	I1214 20:16:08.486338   18209 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cert.pem
	I1214 20:16:08.486402   18209 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cert.pem (1123 bytes)
	I1214 20:16:08.486560   18209 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/key.pem, removing ...
	I1214 20:16:08.486568   18209 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/key.pem
	I1214 20:16:08.486629   18209 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/key.pem (1675 bytes)
	I1214 20:16:08.486762   18209 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca-key.pem org=jenkins.calico-20211214195818-1964 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20211214195818-1964]
	I1214 20:16:08.623469   18209 provision.go:172] copyRemoteCerts
	I1214 20:16:08.623531   18209 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1214 20:16:08.623593   18209 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211214195818-1964
	I1214 20:16:08.737749   18209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63898 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/calico-20211214195818-1964/id_rsa Username:docker}
	I1214 20:16:08.828074   18209 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1214 20:16:08.846022   18209 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1214 20:16:08.864543   18209 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1214 20:16:08.884232   18209 provision.go:86] duration metric: configureAuth took 520.667794ms
	I1214 20:16:08.884246   18209 ubuntu.go:193] setting minikube options for container-runtime
	I1214 20:16:08.884384   18209 config.go:176] Loaded profile config "calico-20211214195818-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 20:16:08.884466   18209 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211214195818-1964
	I1214 20:16:09.002809   18209 main.go:130] libmachine: Using SSH client type: native
	I1214 20:16:09.002958   18209 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 63898 <nil> <nil>}
	I1214 20:16:09.002974   18209 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1214 20:16:09.125994   18209 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1214 20:16:09.126011   18209 ubuntu.go:71] root file system type: overlay
	I1214 20:16:09.126156   18209 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1214 20:16:09.126258   18209 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211214195818-1964
	I1214 20:16:09.247417   18209 main.go:130] libmachine: Using SSH client type: native
	I1214 20:16:09.247583   18209 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 63898 <nil> <nil>}
	I1214 20:16:09.247638   18209 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1214 20:16:09.387210   18209 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1214 20:16:09.387366   18209 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211214195818-1964
	I1214 20:16:09.510688   18209 main.go:130] libmachine: Using SSH client type: native
	I1214 20:16:09.510845   18209 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 63898 <nil> <nil>}
	I1214 20:16:09.510859   18209 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1214 20:16:38.832983   18209 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-11-18 00:35:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-12-15 04:16:09.404602097 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1214 20:16:38.833004   18209 machine.go:91] provisioned docker machine in 30.986933362s
	I1214 20:16:38.833010   18209 client.go:171] LocalClient.Create took 47.90751213s
	I1214 20:16:38.833032   18209 start.go:168] duration metric: libmachine.API.Create for "calico-20211214195818-1964" took 47.907562811s
	I1214 20:16:38.833048   18209 start.go:267] post-start starting for "calico-20211214195818-1964" (driver="docker")
	I1214 20:16:38.833055   18209 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1214 20:16:38.833150   18209 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1214 20:16:38.833224   18209 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211214195818-1964
	I1214 20:16:38.958024   18209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63898 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/calico-20211214195818-1964/id_rsa Username:docker}
	I1214 20:16:39.050013   18209 ssh_runner.go:195] Run: cat /etc/os-release
	I1214 20:16:39.053840   18209 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1214 20:16:39.053858   18209 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1214 20:16:39.053864   18209 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1214 20:16:39.053870   18209 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1214 20:16:39.053880   18209 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/addons for local assets ...
	I1214 20:16:39.053992   18209 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files for local assets ...
	I1214 20:16:39.054149   18209 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem -> 19642.pem in /etc/ssl/certs
	I1214 20:16:39.054334   18209 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1214 20:16:39.063537   18209 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem --> /etc/ssl/certs/19642.pem (1708 bytes)
	I1214 20:16:39.087976   18209 start.go:270] post-start completed in 254.920365ms
	I1214 20:16:39.088603   18209 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20211214195818-1964
	I1214 20:16:39.211492   18209 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/config.json ...
	I1214 20:16:39.211988   18209 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1214 20:16:39.212057   18209 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211214195818-1964
	I1214 20:16:39.339608   18209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63898 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/calico-20211214195818-1964/id_rsa Username:docker}
	I1214 20:16:39.427428   18209 start.go:129] duration metric: createHost completed in 48.528732354s
	I1214 20:16:39.427455   18209 start.go:80] releasing machines lock for "calico-20211214195818-1964", held for 48.528848912s
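The disk-usage probe logged at 20:16:39.211988 (`df -h /var | awk 'NR==2{print $5}'`) just pulls the fifth field (Use%) from the second line of `df` output. A minimal sketch on canned output, so it runs without the target filesystem:

```shell
# Same field extraction as the disk-usage probe above, run on canned df output.
df_out='Filesystem Size Used Avail Use% Mounted
/dev/vda1 20G 5G 15G 25% /var'
printf '%s\n' "$df_out" | awk 'NR==2{print $5}'   # -> 25%
```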
	I1214 20:16:39.427590   18209 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20211214195818-1964
	I1214 20:16:39.555012   18209 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1214 20:16:39.555019   18209 ssh_runner.go:195] Run: systemctl --version
	I1214 20:16:39.555103   18209 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211214195818-1964
	I1214 20:16:39.555112   18209 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211214195818-1964
	I1214 20:16:39.698823   18209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63898 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/calico-20211214195818-1964/id_rsa Username:docker}
	I1214 20:16:39.698827   18209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63898 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/calico-20211214195818-1964/id_rsa Username:docker}
	I1214 20:16:40.253157   18209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1214 20:16:40.263771   18209 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1214 20:16:40.276364   18209 cruntime.go:257] skipping containerd shutdown because we are bound to it
	I1214 20:16:40.276444   18209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1214 20:16:40.288512   18209 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
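The `printf ... | sudo tee /etc/crictl.yaml` step above writes a two-line crictl config pointing both the runtime and image endpoints at the dockershim socket. A sketch of the same write, using a temp file instead of /etc/crictl.yaml (which needs root):

```shell
# Recreate the crictl.yaml written above, in a scratch file.
out=$(mktemp)
printf '%s' 'runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
' > "$out"
grep -c 'unix:///var/run/dockershim.sock' "$out"   # -> 2: both endpoints set
```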
	I1214 20:16:40.303385   18209 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1214 20:16:40.377402   18209 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1214 20:16:40.450636   18209 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1214 20:16:40.462240   18209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1214 20:16:40.524751   18209 ssh_runner.go:195] Run: sudo systemctl start docker
	I1214 20:16:40.535596   18209 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1214 20:16:40.583096   18209 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1214 20:16:40.668758   18209 out.go:203] * Preparing Kubernetes v1.22.4 on Docker 20.10.11 ...
	I1214 20:16:40.668863   18209 cli_runner.go:115] Run: docker exec -t calico-20211214195818-1964 dig +short host.docker.internal
	I1214 20:16:40.854668   18209 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1214 20:16:40.854784   18209 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1214 20:16:40.860600   18209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
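The /etc/hosts edit above uses a grep-out-then-append idiom: drop any stale `host.minikube.internal` line, append the current mapping, and copy the result back over the file. A sketch of the equivalent POSIX form of that bash idiom, against a scratch copy rather than the real /etc/hosts:

```shell
# Scratch hosts file with a stale host.minikube.internal entry.
hosts=$(mktemp)
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n192.168.65.99\thost.minikube.internal\n' > "$hosts"
# Drop the stale line, append the fresh mapping, replace the file.
{ grep -v "${tab}host.minikube.internal\$" "$hosts"
  printf '192.168.65.2\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
grep 'host.minikube.internal' "$hosts"   # only the fresh 192.168.65.2 entry remains
```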
	I1214 20:16:40.872449   18209 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20211214195818-1964
	I1214 20:16:40.999567   18209 preload.go:132] Checking if preload exists for k8s version v1.22.4 and runtime docker
	I1214 20:16:40.999701   18209 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1214 20:16:41.040408   18209 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.4
	k8s.gcr.io/kube-scheduler:v1.22.4
	k8s.gcr.io/kube-controller-manager:v1.22.4
	k8s.gcr.io/kube-proxy:v1.22.4
	kubernetesui/dashboard:v2.3.1
	k8s.gcr.io/etcd:3.5.0-0
	kubernetesui/metrics-scraper:v1.0.7
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	
	-- /stdout --
	I1214 20:16:41.040430   18209 docker.go:489] Images already preloaded, skipping extraction
	I1214 20:16:41.040573   18209 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1214 20:16:41.078299   18209 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.4
	k8s.gcr.io/kube-scheduler:v1.22.4
	k8s.gcr.io/kube-controller-manager:v1.22.4
	k8s.gcr.io/kube-proxy:v1.22.4
	kubernetesui/dashboard:v2.3.1
	k8s.gcr.io/etcd:3.5.0-0
	kubernetesui/metrics-scraper:v1.0.7
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	
	-- /stdout --
	I1214 20:16:41.078319   18209 cache_images.go:79] Images are preloaded, skipping loading
	I1214 20:16:41.078421   18209 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1214 20:16:41.183280   18209 cni.go:93] Creating CNI manager for "calico"
	I1214 20:16:41.183306   18209 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1214 20:16:41.183320   18209 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.22.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20211214195818-1964 NodeName:calico-20211214195818-1964 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1214 20:16:41.183434   18209 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20211214195818-1964"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
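The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick structural sanity check on such a stream, assuming each document declares exactly one top-level `kind:`; the file here is a cut-down stand-in for the full config above:

```shell
# Rebuild just the document skeleton of the kubeadm config above.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# One kind: per document, so the count equals the number of YAML documents.
grep -c '^kind:' "$cfg"   # -> 4
```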
	
	I1214 20:16:41.183529   18209 kubeadm.go:927] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20211214195818-1964 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.4 ClusterName:calico-20211214195818-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I1214 20:16:41.183612   18209 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.22.4
	I1214 20:16:41.194433   18209 binaries.go:44] Found k8s binaries, skipping transfer
	I1214 20:16:41.194533   18209 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1214 20:16:41.207354   18209 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1214 20:16:41.225072   18209 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1214 20:16:41.240487   18209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I1214 20:16:41.255650   18209 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1214 20:16:41.262146   18209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1214 20:16:41.278743   18209 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964 for IP: 192.168.58.2
	I1214 20:16:41.278899   18209 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.key
	I1214 20:16:41.278966   18209 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/proxy-client-ca.key
	I1214 20:16:41.279030   18209 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/client.key
	I1214 20:16:41.279046   18209 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/client.crt with IP's: []
	I1214 20:16:41.383449   18209 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/client.crt ...
	I1214 20:16:41.383465   18209 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/client.crt: {Name:mk79a82ca6f9f3c6aa383bf413d33ca0e8e94a96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:16:41.384040   18209 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/client.key ...
	I1214 20:16:41.384051   18209 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/client.key: {Name:mkb4eeb42566caa2c07695282fa4c726369861ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:16:41.384261   18209 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/apiserver.key.cee25041
	I1214 20:16:41.384281   18209 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1214 20:16:41.463694   18209 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/apiserver.crt.cee25041 ...
	I1214 20:16:41.463711   18209 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/apiserver.crt.cee25041: {Name:mk325e162f1babb1a8a9b2a48fb6f247ac28daad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:16:41.463998   18209 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/apiserver.key.cee25041 ...
	I1214 20:16:41.464015   18209 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/apiserver.key.cee25041: {Name:mk65f356346ca88cab52017a4017278ae8aa3fe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:16:41.464202   18209 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/apiserver.crt
	I1214 20:16:41.464373   18209 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/apiserver.key
	I1214 20:16:41.464529   18209 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/proxy-client.key
	I1214 20:16:41.464547   18209 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/proxy-client.crt with IP's: []
	I1214 20:16:41.636394   18209 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/proxy-client.crt ...
	I1214 20:16:41.636416   18209 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/proxy-client.crt: {Name:mk1b807affd6ae7e42c6928c8951343be7d6e06c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:16:41.636723   18209 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/proxy-client.key ...
	I1214 20:16:41.636734   18209 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/proxy-client.key: {Name:mk7bd2c367491e93f8851003cb6c7d6a1e366d3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:16:41.637130   18209 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/1964.pem (1338 bytes)
	W1214 20:16:41.637177   18209 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/1964_empty.pem, impossibly tiny 0 bytes
	I1214 20:16:41.637188   18209 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca-key.pem (1675 bytes)
	I1214 20:16:41.637226   18209 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem (1078 bytes)
	I1214 20:16:41.637260   18209 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/cert.pem (1123 bytes)
	I1214 20:16:41.637298   18209 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/key.pem (1675 bytes)
	I1214 20:16:41.637371   18209 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem (1708 bytes)
	I1214 20:16:41.638145   18209 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1214 20:16:41.661692   18209 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1214 20:16:41.681717   18209 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1214 20:16:41.707476   18209 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/calico-20211214195818-1964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1214 20:16:41.733307   18209 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1214 20:16:41.769253   18209 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1214 20:16:41.787406   18209 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1214 20:16:41.812642   18209 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1214 20:16:41.837281   18209 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/1964.pem --> /usr/share/ca-certificates/1964.pem (1338 bytes)
	I1214 20:16:41.859598   18209 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem --> /usr/share/ca-certificates/19642.pem (1708 bytes)
	I1214 20:16:41.882341   18209 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1214 20:16:41.906928   18209 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1214 20:16:41.926410   18209 ssh_runner.go:195] Run: openssl version
	I1214 20:16:41.932448   18209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1964.pem && ln -fs /usr/share/ca-certificates/1964.pem /etc/ssl/certs/1964.pem"
	I1214 20:16:41.941976   18209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1964.pem
	I1214 20:16:41.947186   18209 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 15 03:13 /usr/share/ca-certificates/1964.pem
	I1214 20:16:41.947300   18209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1964.pem
	I1214 20:16:41.954127   18209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1964.pem /etc/ssl/certs/51391683.0"
	I1214 20:16:41.963378   18209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19642.pem && ln -fs /usr/share/ca-certificates/19642.pem /etc/ssl/certs/19642.pem"
	I1214 20:16:41.973425   18209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19642.pem
	I1214 20:16:41.984136   18209 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 15 03:13 /usr/share/ca-certificates/19642.pem
	I1214 20:16:41.984205   18209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19642.pem
	I1214 20:16:41.990972   18209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19642.pem /etc/ssl/certs/3ec20f2e.0"
	I1214 20:16:42.000561   18209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1214 20:16:42.012483   18209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1214 20:16:42.018374   18209 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 15 03:07 /usr/share/ca-certificates/minikubeCA.pem
	I1214 20:16:42.018443   18209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1214 20:16:42.025208   18209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
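The sequence from 20:16:41.932 to 20:16:42.025 installs each PEM under /usr/share/ca-certificates, then links it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL locates trust anchors. A sketch of the same scheme in a scratch directory, using a throwaway self-signed cert (the CN is made up for illustration):

```shell
certdir=$(mktemp -d)
# Throwaway self-signed cert; CN=demoCA is a placeholder, not from the log.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=demoCA' \
  -keyout "$certdir/ca.key" -out "$certdir/ca.pem" 2>/dev/null
# Link the cert under its subject hash, the <hash>.0 naming used above.
hash=$(openssl x509 -hash -noout -in "$certdir/ca.pem")
ln -fs "$certdir/ca.pem" "$certdir/$hash.0"
openssl x509 -noout -subject -in "$certdir/$hash.0"   # readable via the hash link
```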
	I1214 20:16:42.034631   18209 kubeadm.go:390] StartCluster: {Name:calico-20211214195818-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:calico-20211214195818-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 20:16:42.034757   18209 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1214 20:16:42.073023   18209 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1214 20:16:42.081769   18209 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1214 20:16:42.090546   18209 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I1214 20:16:42.090622   18209 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1214 20:16:42.099419   18209 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
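The "config check failed, skipping stale config cleanup" entry above is expected on a fresh node: the probe is just `ls -la` over the four kubeconfig paths, and a missing path makes GNU `ls` exit with status 2, which minikube reads as "no stale config to clean up". A sketch (the demo path is made up; the exact non-zero status can differ across ls implementations):

```shell
# ls on a missing path fails, mirroring the stale-config probe above.
status=0
ls /etc/kubernetes/admin.conf.missing-for-demo 2>/dev/null || status=$?
echo "ls exit status: $status"   # non-zero: path absent, nothing to clean up
```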
	I1214 20:16:42.099445   18209 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1214 20:16:42.717283   18209 out.go:203]   - Generating certificates and keys ...
	I1214 20:16:45.892570   18209 out.go:203]   - Booting up control plane ...
	I1214 20:17:00.432954   18209 out.go:203]   - Configuring RBAC rules ...
	I1214 20:17:00.816583   18209 cni.go:93] Creating CNI manager for "calico"
	I1214 20:17:00.879650   18209 out.go:176] * Configuring Calico (Container Networking Interface) ...
	I1214 20:17:00.879859   18209 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.4/kubectl ...
	I1214 20:17:00.879870   18209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I1214 20:17:00.904094   18209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1214 20:17:02.043866   18209 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.22.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.139759091s)
	I1214 20:17:02.043893   18209 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1214 20:17:02.043999   18209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:17:02.044002   18209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=1bfd93799ca1a0aa711376fa94919427c19ad092 minikube.k8s.io/name=calico-20211214195818-1964 minikube.k8s.io/updated_at=2021_12_14T20_17_02_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:17:02.057677   18209 ops.go:34] apiserver oom_adj: -16
	I1214 20:17:02.128358   18209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... 17 more identical "kubectl get sa default" retries at ~0.5s intervals, 20:17:02.684471 through 20:17:10.685796 ...]
	I1214 20:17:11.192998   18209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:17:11.259332   18209 kubeadm.go:1003] duration metric: took 9.215480076s to wait for elevateKubeSystemPrivileges.
	I1214 20:17:11.259347   18209 kubeadm.go:392] StartCluster complete in 29.224927821s
	I1214 20:17:11.259371   18209 settings.go:142] acquiring lock: {Name:mk93abdcbbc46dc3353c37938fd5d548af35ef3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:17:11.259456   18209 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	I1214 20:17:11.260267   18209 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig: {Name:mk605b877d3a6907cdf2ed75edbb40b36491c1e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:17:11.784801   18209 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20211214195818-1964" rescaled to 1
	I1214 20:17:11.784848   18209 start.go:207] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}
	I1214 20:17:11.784862   18209 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1214 20:17:11.784886   18209 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I1214 20:17:11.814219   18209 out.go:176] * Verifying Kubernetes components...
	I1214 20:17:11.785067   18209 config.go:176] Loaded profile config "calico-20211214195818-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 20:17:11.814280   18209 addons.go:65] Setting default-storageclass=true in profile "calico-20211214195818-1964"
	I1214 20:17:11.814279   18209 addons.go:65] Setting storage-provisioner=true in profile "calico-20211214195818-1964"
	I1214 20:17:11.814290   18209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1214 20:17:11.814295   18209 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20211214195818-1964"
	I1214 20:17:11.814300   18209 addons.go:153] Setting addon storage-provisioner=true in "calico-20211214195818-1964"
	W1214 20:17:11.814306   18209 addons.go:165] addon storage-provisioner should already be in state true
	I1214 20:17:11.814341   18209 host.go:66] Checking if "calico-20211214195818-1964" exists ...
	I1214 20:17:11.814676   18209 cli_runner.go:115] Run: docker container inspect calico-20211214195818-1964 --format={{.State.Status}}
	I1214 20:17:11.814761   18209 cli_runner.go:115] Run: docker container inspect calico-20211214195818-1964 --format={{.State.Status}}
	I1214 20:17:11.854958   18209 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1214 20:17:11.855058   18209 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20211214195818-1964
	I1214 20:17:11.995908   18209 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1214 20:17:11.996081   18209 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1214 20:17:11.996091   18209 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1214 20:17:11.996165   18209 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211214195818-1964
	I1214 20:17:12.002465   18209 addons.go:153] Setting addon default-storageclass=true in "calico-20211214195818-1964"
	W1214 20:17:12.002490   18209 addons.go:165] addon default-storageclass should already be in state true
	I1214 20:17:12.002515   18209 host.go:66] Checking if "calico-20211214195818-1964" exists ...
	I1214 20:17:12.002962   18209 cli_runner.go:115] Run: docker container inspect calico-20211214195818-1964 --format={{.State.Status}}
	I1214 20:17:12.014663   18209 node_ready.go:35] waiting up to 5m0s for node "calico-20211214195818-1964" to be "Ready" ...
	I1214 20:17:12.019032   18209 node_ready.go:49] node "calico-20211214195818-1964" has status "Ready":"True"
	I1214 20:17:12.019045   18209 node_ready.go:38] duration metric: took 4.352263ms waiting for node "calico-20211214195818-1964" to be "Ready" ...
	I1214 20:17:12.019051   18209 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1214 20:17:12.031835   18209 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-58497c65d5-z9czg" in "kube-system" namespace to be "Ready" ...
	I1214 20:17:12.147562   18209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63898 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/calico-20211214195818-1964/id_rsa Username:docker}
	I1214 20:17:12.147594   18209 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1214 20:17:12.147606   18209 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1214 20:17:12.147706   18209 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20211214195818-1964
	I1214 20:17:12.277343   18209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63898 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/calico-20211214195818-1964/id_rsa Username:docker}
	I1214 20:17:12.290721   18209 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1214 20:17:12.407409   18209 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1214 20:17:13.005186   18209 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.150199061s)
	I1214 20:17:13.005205   18209 start.go:774] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I1214 20:17:13.160309   18209 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I1214 20:17:13.160341   18209 addons.go:417] enableAddons completed in 1.375480611s
	I1214 20:17:14.047834   18209 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-z9czg" in "kube-system" namespace has status "Ready":"False"
	[... 103 more identical pod_ready.go:102 polls of pod "calico-kube-controllers-58497c65d5-z9czg" ("Ready":"False") at ~2.5s intervals, 20:17:16.049493 through 20:21:09.046194 ...]
	I1214 20:21:11.046599   18209 pod_ready.go:102] pod "calico-kube-controllers-58497c65d5-z9czg" in "kube-system" namespace has status "Ready":"False"
	I1214 20:21:12.053232   18209 pod_ready.go:81] duration metric: took 4m0.023041124s waiting for pod "calico-kube-controllers-58497c65d5-z9czg" in "kube-system" namespace to be "Ready" ...
	E1214 20:21:12.053243   18209 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1214 20:21:12.053256   18209 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-79vvj" in "kube-system" namespace to be "Ready" ...
	I1214 20:21:14.067237   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	[... 51 more identical pod_ready.go:102 polls of pod "calico-node-79vvj" ("Ready":"False") at ~2.5s intervals, 20:21:16.563840 through 20:23:12.564575 ...]
	I1214 20:23:14.570460   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:17.064749   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:19.068203   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:21.571570   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:24.064775   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:26.069750   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:28.563346   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:30.563855   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:33.070451   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:35.565377   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:37.567990   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:40.063589   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:42.067618   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:44.564154   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:47.067136   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:49.566698   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:51.573647   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:54.063733   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:56.066145   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:23:58.566789   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:01.118916   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:03.617802   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:05.621587   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:08.117209   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:10.123698   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:12.617875   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:14.620031   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:16.620990   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:19.120622   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:21.620735   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:24.125350   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:26.626333   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:29.120492   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:31.621188   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:33.622100   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:36.122858   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:38.619057   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:41.123337   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:43.619535   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:46.119198   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:48.619350   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:50.620629   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:53.118192   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:55.624055   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:24:58.121392   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:25:00.620018   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:25:02.620562   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:25:04.622739   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:25:07.119688   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:25:09.121576   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:25:11.625759   18209 pod_ready.go:102] pod "calico-node-79vvj" in "kube-system" namespace has status "Ready":"False"
	I1214 20:25:12.126079   18209 pod_ready.go:81] duration metric: took 4m0.018725139s waiting for pod "calico-node-79vvj" in "kube-system" namespace to be "Ready" ...
	E1214 20:25:12.126090   18209 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I1214 20:25:12.126104   18209 pod_ready.go:38] duration metric: took 8m0.054617093s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1214 20:25:12.151698   18209 out.go:176] 
	W1214 20:25:12.151773   18209 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W1214 20:25:12.151780   18209 out.go:241] * 
	W1214 20:25:12.152320   18209 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1214 20:25:12.240424   18209 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (562.34s)
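The block above shows minikube polling the calico-node pod's Ready condition roughly every 2.5s until a 4-minute per-pod budget expires (pod_ready.go:81 records the duration, pod_ready.go:66 the timeout). As a rough sketch of that retry-until-deadline pattern — not minikube's actual Go code; `wait_pod_ready` and the injected clock are hypothetical names:

```python
import time

def wait_pod_ready(is_ready, timeout_s=240.0, interval_s=2.5,
                   now=time.monotonic, sleep=time.sleep):
    """Poll is_ready() until it returns True or timeout_s elapses.

    Returns the elapsed time on success; raises TimeoutError with the
    same message the log shows ("timed out waiting for the condition").
    """
    start = now()
    while True:
        if is_ready():
            return now() - start
        if now() - start >= timeout_s:
            raise TimeoutError("timed out waiting for the condition")
        sleep(interval_s)
```

Injecting `now`/`sleep` keeps the sketch testable without real waiting; the defaults behave like an ordinary polling loop.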

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (318.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
E1214 20:21:00.867151    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 20:21:02.210685    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.162524961s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.118273002s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
E1214 20:21:43.170736    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.149723151s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1214 20:21:45.981628    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
E1214 20:21:51.190432    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127381707s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
E1214 20:22:10.973064    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146550656s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148602686s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1214 20:22:44.198158    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
E1214 20:22:44.203549    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
E1214 20:22:44.214396    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
E1214 20:22:44.236923    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
E1214 20:22:44.277698    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
E1214 20:22:44.360785    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
E1214 20:22:44.525246    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
E1214 20:22:44.846480    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
E1214 20:22:45.489406    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
E1214 20:22:46.769745    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
E1214 20:22:49.331131    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
E1214 20:22:54.456127    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134896616s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1214 20:23:04.696877    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
E1214 20:23:05.092656    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
E1214 20:23:25.177056    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131973631s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131322074s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1214 20:24:06.196472    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
E1214 20:24:26.096481    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:24:26.101731    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:24:26.112166    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:24:26.135522    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:24:26.175691    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:24:26.256048    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:24:26.420400    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:24:26.840652    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:24:27.133841    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 20:24:27.481300    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:24:28.765258    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.1303204s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1214 20:24:31.329286    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:24:36.449798    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:24:37.806254    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 20:24:46.751953    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:24:54.868073    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
E1214 20:25:07.232383    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.15204376s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1214 20:25:28.117091    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.15464987s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (318.12s)

TestNetworkPlugins/group/kindnet/Start (327.38s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20211214195818-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
E1214 20:25:44.964153    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 20:25:48.193086    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:25:48.987955    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kindnet-20211214195818-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : exit status 80 (5m27.367897664s)

-- stdout --
	* [kindnet-20211214195818-1964] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13173
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node kindnet-20211214195818-1964 in cluster kindnet-20211214195818-1964
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.22.4 on Docker 20.10.11 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I1214 20:25:34.203855   19165 out.go:297] Setting OutFile to fd 1 ...
	I1214 20:25:34.203980   19165 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 20:25:34.203984   19165 out.go:310] Setting ErrFile to fd 2...
	I1214 20:25:34.203988   19165 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 20:25:34.204063   19165 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/bin
	I1214 20:25:34.204382   19165 out.go:304] Setting JSON to false
	I1214 20:25:34.228237   19165 start.go:112] hostinfo: {"hostname":"37309.local","uptime":5109,"bootTime":1639537225,"procs":322,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1214 20:25:34.228329   19165 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1214 20:25:34.255205   19165 out.go:176] * [kindnet-20211214195818-1964] minikube v1.24.0 on Darwin 11.2.3
	I1214 20:25:34.255303   19165 notify.go:174] Checking for updates...
	I1214 20:25:34.301965   19165 out.go:176]   - MINIKUBE_LOCATION=13173
	I1214 20:25:34.327776   19165 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	I1214 20:25:34.353936   19165 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1214 20:25:34.381039   19165 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube
	I1214 20:25:34.381517   19165 config.go:176] Loaded profile config "enable-default-cni-20211214195817-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 20:25:34.381573   19165 driver.go:344] Setting default libvirt URI to qemu:///system
	I1214 20:25:34.480286   19165 docker.go:132] docker version: linux-20.10.6
	I1214 20:25:34.480426   19165 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 20:25:34.665044   19165 info.go:263] docker info: {ID:5AO3:Q7BV:QPO2:IORE:2FWE:BSI4:OSEF:34WA:NLU4:XM3Q:JID7:HR3K Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2021-12-15 04:25:34.60888056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAd
dress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secco
mp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 20:25:34.731171   19165 out.go:176] * Using the docker driver based on user configuration
	I1214 20:25:34.731225   19165 start.go:280] selected driver: docker
	I1214 20:25:34.731251   19165 start.go:795] validating driver "docker" against <nil>
	I1214 20:25:34.731287   19165 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1214 20:25:34.734562   19165 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 20:25:34.911572   19165 info.go:263] docker info: {ID:5AO3:Q7BV:QPO2:IORE:2FWE:BSI4:OSEF:34WA:NLU4:XM3Q:JID7:HR3K Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2021-12-15 04:25:34.859497899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 20:25:34.911675   19165 start_flags.go:284] no existing cluster config was found, will generate one from the flags 
	I1214 20:25:34.911805   19165 start_flags.go:810] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1214 20:25:34.911820   19165 cni.go:93] Creating CNI manager for "kindnet"
	I1214 20:25:34.911836   19165 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1214 20:25:34.911841   19165 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I1214 20:25:34.911844   19165 start_flags.go:293] Found "CNI" CNI - setting NetworkPlugin=cni
	I1214 20:25:34.911856   19165 start_flags.go:298] config:
	{Name:kindnet-20211214195818-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:kindnet-20211214195818-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 20:25:34.960180   19165 out.go:176] * Starting control plane node kindnet-20211214195818-1964 in cluster kindnet-20211214195818-1964
	I1214 20:25:34.960269   19165 cache.go:118] Beginning downloading kic base image for docker with docker
	I1214 20:25:34.986451   19165 out.go:176] * Pulling base image ...
	I1214 20:25:34.986499   19165 preload.go:132] Checking if preload exists for k8s version v1.22.4 and runtime docker
	I1214 20:25:34.986542   19165 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab in local docker daemon
	I1214 20:25:34.986558   19165 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4
	I1214 20:25:34.986575   19165 cache.go:57] Caching tarball of preloaded images
	I1214 20:25:34.986717   19165 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1214 20:25:34.986732   19165 cache.go:60] Finished verifying existence of preloaded tar for  v1.22.4 on docker
	I1214 20:25:34.987437   19165 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/config.json ...
	I1214 20:25:34.987561   19165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/config.json: {Name:mk4cac430f74c8f74aef7d44f8b7ea7066289fde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:25:35.103640   19165 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab in local docker daemon, skipping pull
	I1214 20:25:35.103664   19165 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab exists in daemon, skipping load
	I1214 20:25:35.103675   19165 cache.go:206] Successfully downloaded all kic artifacts
	I1214 20:25:35.103727   19165 start.go:313] acquiring machines lock for kindnet-20211214195818-1964: {Name:mkb0886250712c869a9cd500ab3aded738fe9a07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1214 20:25:35.103864   19165 start.go:317] acquired machines lock for "kindnet-20211214195818-1964" in 122.819µs
	I1214 20:25:35.103895   19165 start.go:89] Provisioning new machine with config: &{Name:kindnet-20211214195818-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:kindnet-20211214195818-1964 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker} &{Name: IP: Port:8443
KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}
	I1214 20:25:35.104003   19165 start.go:126] createHost starting for "" (driver="docker")
	I1214 20:25:35.152786   19165 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1214 20:25:35.153113   19165 start.go:160] libmachine.API.Create for "kindnet-20211214195818-1964" (driver="docker")
	I1214 20:25:35.153158   19165 client.go:168] LocalClient.Create starting
	I1214 20:25:35.153313   19165 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem
	I1214 20:25:35.153400   19165 main.go:130] libmachine: Decoding PEM data...
	I1214 20:25:35.153433   19165 main.go:130] libmachine: Parsing certificate...
	I1214 20:25:35.153544   19165 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/cert.pem
	I1214 20:25:35.153598   19165 main.go:130] libmachine: Decoding PEM data...
	I1214 20:25:35.153617   19165 main.go:130] libmachine: Parsing certificate...
	I1214 20:25:35.154674   19165 cli_runner.go:115] Run: docker network inspect kindnet-20211214195818-1964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1214 20:25:35.271199   19165 cli_runner.go:162] docker network inspect kindnet-20211214195818-1964 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1214 20:25:35.271319   19165 network_create.go:254] running [docker network inspect kindnet-20211214195818-1964] to gather additional debugging logs...
	I1214 20:25:35.271341   19165 cli_runner.go:115] Run: docker network inspect kindnet-20211214195818-1964
	W1214 20:25:35.386220   19165 cli_runner.go:162] docker network inspect kindnet-20211214195818-1964 returned with exit code 1
	I1214 20:25:35.386242   19165 network_create.go:257] error running [docker network inspect kindnet-20211214195818-1964]: docker network inspect kindnet-20211214195818-1964: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20211214195818-1964
	I1214 20:25:35.386261   19165 network_create.go:259] output of [docker network inspect kindnet-20211214195818-1964]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20211214195818-1964
	
	** /stderr **
	I1214 20:25:35.386345   19165 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1214 20:25:35.499965   19165 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0009c47c0] misses:0}
	I1214 20:25:35.500006   19165 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1214 20:25:35.500026   19165 network_create.go:106] attempt to create docker network kindnet-20211214195818-1964 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1214 20:25:35.500115   19165 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20211214195818-1964
	W1214 20:25:35.616156   19165 cli_runner.go:162] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20211214195818-1964 returned with exit code 1
	W1214 20:25:35.616210   19165 network_create.go:98] failed to create docker network kindnet-20211214195818-1964 192.168.49.0/24, will retry: subnet is taken
	I1214 20:25:35.616428   19165 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009c47c0] amended:false}} dirty:map[] misses:0}
	I1214 20:25:35.616446   19165 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1214 20:25:35.616623   19165 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0009c47c0] amended:true}} dirty:map[192.168.49.0:0xc0009c47c0 192.168.58.0:0xc0006f2000] misses:0}
	I1214 20:25:35.616638   19165 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1214 20:25:35.616646   19165 network_create.go:106] attempt to create docker network kindnet-20211214195818-1964 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1214 20:25:35.616734   19165 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20211214195818-1964
	I1214 20:25:41.543530   19165 cli_runner.go:168] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20211214195818-1964: (5.926718581s)
	I1214 20:25:41.543550   19165 network_create.go:90] docker network kindnet-20211214195818-1964 192.168.58.0/24 created
	I1214 20:25:41.543564   19165 kic.go:106] calculated static IP "192.168.58.2" for the "kindnet-20211214195818-1964" container
	I1214 20:25:41.543685   19165 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I1214 20:25:41.660263   19165 cli_runner.go:115] Run: docker volume create kindnet-20211214195818-1964 --label name.minikube.sigs.k8s.io=kindnet-20211214195818-1964 --label created_by.minikube.sigs.k8s.io=true
	I1214 20:25:41.776062   19165 oci.go:102] Successfully created a docker volume kindnet-20211214195818-1964
	I1214 20:25:41.776215   19165 cli_runner.go:115] Run: docker run --rm --name kindnet-20211214195818-1964-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20211214195818-1964 --entrypoint /usr/bin/test -v kindnet-20211214195818-1964:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab -d /var/lib
	I1214 20:25:42.304140   19165 oci.go:106] Successfully prepared a docker volume kindnet-20211214195818-1964
	I1214 20:25:42.304178   19165 preload.go:132] Checking if preload exists for k8s version v1.22.4 and runtime docker
	I1214 20:25:42.304192   19165 kic.go:179] Starting extracting preloaded images to volume ...
	I1214 20:25:42.304295   19165 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20211214195818-1964:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab -I lz4 -xf /preloaded.tar -C /extractDir
	I1214 20:25:47.558313   19165 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20211214195818-1964:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab -I lz4 -xf /preloaded.tar -C /extractDir: (5.253900738s)
	I1214 20:25:47.558342   19165 kic.go:188] duration metric: took 5.254125 seconds to extract preloaded images to volume
	I1214 20:25:47.558478   19165 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1214 20:25:47.742778   19165 cli_runner.go:115] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20211214195818-1964 --name kindnet-20211214195818-1964 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20211214195818-1964 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20211214195818-1964 --network kindnet-20211214195818-1964 --ip 192.168.58.2 --volume kindnet-20211214195818-1964:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab
	I1214 20:25:58.876781   19165 cli_runner.go:168] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20211214195818-1964 --name kindnet-20211214195818-1964 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20211214195818-1964 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20211214195818-1964 --network kindnet-20211214195818-1964 --ip 192.168.58.2 --volume kindnet-20211214195818-1964:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab: (11.133884639s)
	I1214 20:25:58.876892   19165 cli_runner.go:115] Run: docker container inspect kindnet-20211214195818-1964 --format={{.State.Running}}
	I1214 20:25:58.995960   19165 cli_runner.go:115] Run: docker container inspect kindnet-20211214195818-1964 --format={{.State.Status}}
	I1214 20:25:59.116450   19165 cli_runner.go:115] Run: docker exec kindnet-20211214195818-1964 stat /var/lib/dpkg/alternatives/iptables
	I1214 20:25:59.299500   19165 oci.go:281] the created container "kindnet-20211214195818-1964" has a running status.
	I1214 20:25:59.299531   19165 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/kindnet-20211214195818-1964/id_rsa...
	I1214 20:25:59.349773   19165 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/kindnet-20211214195818-1964/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1214 20:25:59.520809   19165 cli_runner.go:115] Run: docker container inspect kindnet-20211214195818-1964 --format={{.State.Status}}
	I1214 20:25:59.637403   19165 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1214 20:25:59.637421   19165 kic_runner.go:114] Args: [docker exec --privileged kindnet-20211214195818-1964 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1214 20:25:59.817485   19165 cli_runner.go:115] Run: docker container inspect kindnet-20211214195818-1964 --format={{.State.Status}}
	I1214 20:25:59.936655   19165 machine.go:88] provisioning docker machine ...
	I1214 20:25:59.936695   19165 ubuntu.go:169] provisioning hostname "kindnet-20211214195818-1964"
	I1214 20:25:59.936794   19165 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211214195818-1964
	I1214 20:26:00.058417   19165 main.go:130] libmachine: Using SSH client type: native
	I1214 20:26:00.058628   19165 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 49459 <nil> <nil>}
	I1214 20:26:00.058647   19165 main.go:130] libmachine: About to run SSH command:
	sudo hostname kindnet-20211214195818-1964 && echo "kindnet-20211214195818-1964" | sudo tee /etc/hostname
	I1214 20:26:00.059978   19165 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1214 20:26:03.191768   19165 main.go:130] libmachine: SSH cmd err, output: <nil>: kindnet-20211214195818-1964
	
	I1214 20:26:03.191869   19165 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211214195818-1964
	I1214 20:26:03.312396   19165 main.go:130] libmachine: Using SSH client type: native
	I1214 20:26:03.312552   19165 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 49459 <nil> <nil>}
	I1214 20:26:03.312567   19165 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-20211214195818-1964' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20211214195818-1964/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-20211214195818-1964' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1214 20:26:03.436290   19165 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I1214 20:26:03.436318   19165 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.p
em ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube}
	I1214 20:26:03.436336   19165 ubuntu.go:177] setting up certificates
	I1214 20:26:03.436342   19165 provision.go:83] configureAuth start
	I1214 20:26:03.436416   19165 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20211214195818-1964
	I1214 20:26:03.555863   19165 provision.go:138] copyHostCerts
	I1214 20:26:03.555955   19165 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.pem, removing ...
	I1214 20:26:03.555963   19165 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.pem
	I1214 20:26:03.556060   19165 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.pem (1078 bytes)
	I1214 20:26:03.556280   19165 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cert.pem, removing ...
	I1214 20:26:03.556292   19165 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cert.pem
	I1214 20:26:03.556349   19165 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cert.pem (1123 bytes)
	I1214 20:26:03.556498   19165 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/key.pem, removing ...
	I1214 20:26:03.556505   19165 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/key.pem
	I1214 20:26:03.556566   19165 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/key.pem (1675 bytes)
	I1214 20:26:03.556686   19165 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca-key.pem org=jenkins.kindnet-20211214195818-1964 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20211214195818-1964]
	I1214 20:26:03.643508   19165 provision.go:172] copyRemoteCerts
	I1214 20:26:03.643580   19165 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1214 20:26:03.643637   19165 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211214195818-1964
	I1214 20:26:03.765886   19165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49459 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/kindnet-20211214195818-1964/id_rsa Username:docker}
	I1214 20:26:03.856464   19165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1214 20:26:03.873613   19165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1214 20:26:03.892211   19165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1214 20:26:03.909357   19165 provision.go:86] duration metric: configureAuth took 473.001588ms
	I1214 20:26:03.909369   19165 ubuntu.go:193] setting minikube options for container-runtime
	I1214 20:26:03.909515   19165 config.go:176] Loaded profile config "kindnet-20211214195818-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 20:26:03.909586   19165 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211214195818-1964
	I1214 20:26:04.029423   19165 main.go:130] libmachine: Using SSH client type: native
	I1214 20:26:04.029602   19165 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 49459 <nil> <nil>}
	I1214 20:26:04.029613   19165 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1214 20:26:04.150664   19165 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1214 20:26:04.150680   19165 ubuntu.go:71] root file system type: overlay
	I1214 20:26:04.150900   19165 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1214 20:26:04.150999   19165 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211214195818-1964
	I1214 20:26:04.271801   19165 main.go:130] libmachine: Using SSH client type: native
	I1214 20:26:04.271948   19165 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 49459 <nil> <nil>}
	I1214 20:26:04.271994   19165 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1214 20:26:04.401588   19165 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1214 20:26:04.401713   19165 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211214195818-1964
	I1214 20:26:04.518894   19165 main.go:130] libmachine: Using SSH client type: native
	I1214 20:26:04.519054   19165 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397120] 0x139a200 <nil>  [] 0s} 127.0.0.1 49459 <nil> <nil>}
	I1214 20:26:04.519066   19165 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1214 20:26:26.586141   19165 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-11-18 00:35:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-12-15 04:26:04.414553080 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1214 20:26:26.586196   19165 machine.go:91] provisioned docker machine in 26.649391323s
	I1214 20:26:26.586205   19165 client.go:171] LocalClient.Create took 51.432788943s
	I1214 20:26:26.586250   19165 start.go:168] duration metric: libmachine.API.Create for "kindnet-20211214195818-1964" took 51.432886995s
	I1214 20:26:26.586265   19165 start.go:267] post-start starting for "kindnet-20211214195818-1964" (driver="docker")
	I1214 20:26:26.586271   19165 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1214 20:26:26.586386   19165 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1214 20:26:26.586486   19165 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211214195818-1964
	I1214 20:26:26.714147   19165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49459 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/kindnet-20211214195818-1964/id_rsa Username:docker}
	I1214 20:26:26.802958   19165 ssh_runner.go:195] Run: cat /etc/os-release
	I1214 20:26:26.807090   19165 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1214 20:26:26.807112   19165 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1214 20:26:26.807118   19165 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1214 20:26:26.807125   19165 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I1214 20:26:26.807136   19165 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/addons for local assets ...
	I1214 20:26:26.807225   19165 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files for local assets ...
	I1214 20:26:26.807392   19165 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem -> 19642.pem in /etc/ssl/certs
	I1214 20:26:26.807570   19165 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1214 20:26:26.815214   19165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem --> /etc/ssl/certs/19642.pem (1708 bytes)
	I1214 20:26:26.835376   19165 start.go:270] post-start completed in 249.096963ms
	I1214 20:26:26.835902   19165 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20211214195818-1964
	I1214 20:26:26.963991   19165 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/config.json ...
	I1214 20:26:26.964410   19165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1214 20:26:26.964472   19165 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211214195818-1964
	I1214 20:26:27.090106   19165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49459 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/kindnet-20211214195818-1964/id_rsa Username:docker}
	I1214 20:26:27.176920   19165 start.go:129] duration metric: createHost completed in 52.0726531s
	I1214 20:26:27.176937   19165 start.go:80] releasing machines lock for "kindnet-20211214195818-1964", held for 52.072812513s
	I1214 20:26:27.177041   19165 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20211214195818-1964
	I1214 20:26:27.376129   19165 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1214 20:26:27.376154   19165 ssh_runner.go:195] Run: systemctl --version
	I1214 20:26:27.376225   19165 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211214195818-1964
	I1214 20:26:27.376230   19165 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211214195818-1964
	I1214 20:26:27.505888   19165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49459 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/kindnet-20211214195818-1964/id_rsa Username:docker}
	I1214 20:26:27.505884   19165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49459 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/kindnet-20211214195818-1964/id_rsa Username:docker}
	I1214 20:26:27.593235   19165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1214 20:26:28.065841   19165 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1214 20:26:28.076910   19165 cruntime.go:257] skipping containerd shutdown because we are bound to it
	I1214 20:26:28.076966   19165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1214 20:26:28.086872   19165 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1214 20:26:28.100597   19165 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1214 20:26:28.163137   19165 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1214 20:26:28.220862   19165 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1214 20:26:28.232091   19165 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1214 20:26:28.293239   19165 ssh_runner.go:195] Run: sudo systemctl start docker
	I1214 20:26:28.305175   19165 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1214 20:26:28.347297   19165 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1214 20:26:28.440879   19165 out.go:203] * Preparing Kubernetes v1.22.4 on Docker 20.10.11 ...
	I1214 20:26:28.441100   19165 cli_runner.go:115] Run: docker exec -t kindnet-20211214195818-1964 dig +short host.docker.internal
	I1214 20:26:28.622847   19165 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1214 20:26:28.622940   19165 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1214 20:26:28.627758   19165 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1214 20:26:28.638999   19165 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20211214195818-1964
	I1214 20:26:28.753566   19165 preload.go:132] Checking if preload exists for k8s version v1.22.4 and runtime docker
	I1214 20:26:28.753646   19165 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1214 20:26:28.784388   19165 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.4
	k8s.gcr.io/kube-scheduler:v1.22.4
	k8s.gcr.io/kube-controller-manager:v1.22.4
	k8s.gcr.io/kube-proxy:v1.22.4
	kubernetesui/dashboard:v2.3.1
	k8s.gcr.io/etcd:3.5.0-0
	kubernetesui/metrics-scraper:v1.0.7
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	
	-- /stdout --
	I1214 20:26:28.784401   19165 docker.go:489] Images already preloaded, skipping extraction
	I1214 20:26:28.784500   19165 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1214 20:26:28.816746   19165 docker.go:558] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.22.4
	k8s.gcr.io/kube-scheduler:v1.22.4
	k8s.gcr.io/kube-controller-manager:v1.22.4
	k8s.gcr.io/kube-proxy:v1.22.4
	kubernetesui/dashboard:v2.3.1
	k8s.gcr.io/etcd:3.5.0-0
	kubernetesui/metrics-scraper:v1.0.7
	k8s.gcr.io/coredns/coredns:v1.8.4
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.5
	
	-- /stdout --
	I1214 20:26:28.816772   19165 cache_images.go:79] Images are preloaded, skipping loading
	I1214 20:26:28.816864   19165 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1214 20:26:28.898187   19165 cni.go:93] Creating CNI manager for "kindnet"
	I1214 20:26:28.898209   19165 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1214 20:26:28.898226   19165 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.22.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20211214195818-1964 NodeName:kindnet-20211214195818-1964 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/
minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I1214 20:26:28.898377   19165 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kindnet-20211214195818-1964"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1214 20:26:28.898476   19165 kubeadm.go:927] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kindnet-20211214195818-1964 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.4 ClusterName:kindnet-20211214195818-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I1214 20:26:28.898542   19165 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.22.4
	I1214 20:26:28.906529   19165 binaries.go:44] Found k8s binaries, skipping transfer
	I1214 20:26:28.906593   19165 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1214 20:26:28.913788   19165 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (405 bytes)
	I1214 20:26:28.927282   19165 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1214 20:26:28.939927   19165 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I1214 20:26:28.954083   19165 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1214 20:26:28.957965   19165 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1214 20:26:28.967211   19165 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964 for IP: 192.168.58.2
	I1214 20:26:28.967343   19165 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.key
	I1214 20:26:28.967398   19165 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/proxy-client-ca.key
	I1214 20:26:28.967452   19165 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/client.key
	I1214 20:26:28.967470   19165 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/client.crt with IP's: []
	I1214 20:26:29.190914   19165 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/client.crt ...
	I1214 20:26:29.200583   19165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/client.crt: {Name:mkffe12f27ee69fe890e7e9e368fe4c433fa30bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:26:29.201444   19165 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/client.key ...
	I1214 20:26:29.201461   19165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/client.key: {Name:mk46fcb0091dc2b1fb49b028c59acf23186b2dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:26:29.201823   19165 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/apiserver.key.cee25041
	I1214 20:26:29.201865   19165 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1214 20:26:29.340723   19165 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/apiserver.crt.cee25041 ...
	I1214 20:26:29.340737   19165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/apiserver.crt.cee25041: {Name:mk11a4bcaebd2f48448d27b3cfb803af38070711 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:26:29.341010   19165 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/apiserver.key.cee25041 ...
	I1214 20:26:29.341018   19165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/apiserver.key.cee25041: {Name:mkf7c82c3930e4a6bdd0fead80e89f0a9160227a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:26:29.341194   19165 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/apiserver.crt
	I1214 20:26:29.341358   19165 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/apiserver.key
	I1214 20:26:29.341512   19165 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/proxy-client.key
	I1214 20:26:29.341530   19165 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/proxy-client.crt with IP's: []
	I1214 20:26:29.416580   19165 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/proxy-client.crt ...
	I1214 20:26:29.416600   19165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/proxy-client.crt: {Name:mk3ea7ecc36504d81977ff423f932113d21a5892 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:26:29.416918   19165 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/proxy-client.key ...
	I1214 20:26:29.416927   19165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/proxy-client.key: {Name:mked58c02d0c613159ce50f72edaa97531c71a9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:26:29.417330   19165 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/1964.pem (1338 bytes)
	W1214 20:26:29.417380   19165 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/1964_empty.pem, impossibly tiny 0 bytes
	I1214 20:26:29.417390   19165 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca-key.pem (1675 bytes)
	I1214 20:26:29.417431   19165 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/ca.pem (1078 bytes)
	I1214 20:26:29.417468   19165 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/cert.pem (1123 bytes)
	I1214 20:26:29.417503   19165 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/key.pem (1675 bytes)
	I1214 20:26:29.417923   19165 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem (1708 bytes)
	I1214 20:26:29.418736   19165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1214 20:26:29.444637   19165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1214 20:26:29.463397   19165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1214 20:26:29.487736   19165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kindnet-20211214195818-1964/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1214 20:26:29.509007   19165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1214 20:26:29.526182   19165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1214 20:26:29.545391   19165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1214 20:26:29.561957   19165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1214 20:26:29.579438   19165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1214 20:26:29.597864   19165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/certs/1964.pem --> /usr/share/ca-certificates/1964.pem (1338 bytes)
	I1214 20:26:29.615764   19165 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/ssl/certs/19642.pem --> /usr/share/ca-certificates/19642.pem (1708 bytes)
	I1214 20:26:29.634361   19165 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1214 20:26:29.647721   19165 ssh_runner.go:195] Run: openssl version
	I1214 20:26:29.653468   19165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1214 20:26:29.662305   19165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1214 20:26:29.667076   19165 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Dec 15 03:07 /usr/share/ca-certificates/minikubeCA.pem
	I1214 20:26:29.667129   19165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1214 20:26:29.672736   19165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1214 20:26:29.680740   19165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1964.pem && ln -fs /usr/share/ca-certificates/1964.pem /etc/ssl/certs/1964.pem"
	I1214 20:26:29.690310   19165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1964.pem
	I1214 20:26:29.694499   19165 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Dec 15 03:13 /usr/share/ca-certificates/1964.pem
	I1214 20:26:29.694556   19165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1964.pem
	I1214 20:26:29.700337   19165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1964.pem /etc/ssl/certs/51391683.0"
	I1214 20:26:29.710445   19165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19642.pem && ln -fs /usr/share/ca-certificates/19642.pem /etc/ssl/certs/19642.pem"
	I1214 20:26:29.718250   19165 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19642.pem
	I1214 20:26:29.722692   19165 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Dec 15 03:13 /usr/share/ca-certificates/19642.pem
	I1214 20:26:29.722761   19165 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19642.pem
	I1214 20:26:29.729258   19165 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19642.pem /etc/ssl/certs/3ec20f2e.0"
	I1214 20:26:29.737741   19165 kubeadm.go:390] StartCluster: {Name:kindnet-20211214195818-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:kindnet-20211214195818-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 20:26:29.738547   19165 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1214 20:26:29.769303   19165 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1214 20:26:29.777387   19165 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1214 20:26:29.785257   19165 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I1214 20:26:29.785318   19165 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1214 20:26:29.792872   19165 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1214 20:26:29.792898   19165 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1214 20:26:30.337040   19165 out.go:203]   - Generating certificates and keys ...
	I1214 20:26:32.488004   19165 out.go:203]   - Booting up control plane ...
	I1214 20:26:47.551959   19165 out.go:203]   - Configuring RBAC rules ...
	I1214 20:26:47.945093   19165 cni.go:93] Creating CNI manager for "kindnet"
	I1214 20:26:47.974010   19165 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I1214 20:26:47.974089   19165 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1214 20:26:47.981145   19165 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.4/kubectl ...
	I1214 20:26:47.981156   19165 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1214 20:26:48.019796   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1214 20:26:48.767708   19165 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1214 20:26:48.767801   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=1bfd93799ca1a0aa711376fa94919427c19ad092 minikube.k8s.io/name=kindnet-20211214195818-1964 minikube.k8s.io/updated_at=2021_12_14T20_26_48_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:48.767803   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:48.780986   19165 ops.go:34] apiserver oom_adj: -16
	I1214 20:26:48.857039   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:49.429688   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:49.928612   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:50.428551   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:50.932326   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:51.432572   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:51.928570   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:52.428553   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:52.928618   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:53.431656   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:53.928543   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:54.432552   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:54.932763   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:55.431489   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:55.928570   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:56.428637   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:56.935832   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:57.435761   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:57.934500   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:58.434506   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:58.933310   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:59.433230   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:26:59.932028   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:27:00.431948   19165 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.22.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1214 20:27:00.628291   19165 kubeadm.go:1003] duration metric: took 11.860494955s to wait for elevateKubeSystemPrivileges.
	I1214 20:27:00.628310   19165 kubeadm.go:392] StartCluster complete in 30.890424879s
	I1214 20:27:00.628334   19165 settings.go:142] acquiring lock: {Name:mk93abdcbbc46dc3353c37938fd5d548af35ef3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:27:00.628444   19165 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	I1214 20:27:00.629145   19165 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig: {Name:mk605b877d3a6907cdf2ed75edbb40b36491c1e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 20:27:01.157650   19165 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20211214195818-1964" rescaled to 1
	I1214 20:27:01.157682   19165 start.go:207] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}
	I1214 20:27:01.157697   19165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1214 20:27:01.157714   19165 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I1214 20:27:01.157763   19165 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20211214195818-1964"
	I1214 20:27:01.185262   19165 out.go:176] * Verifying Kubernetes components...
	I1214 20:27:01.157784   19165 addons.go:65] Setting default-storageclass=true in profile "kindnet-20211214195818-1964"
	I1214 20:27:01.185293   19165 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20211214195818-1964"
	W1214 20:27:01.185303   19165 addons.go:165] addon storage-provisioner should already be in state true
	I1214 20:27:01.185306   19165 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20211214195818-1964"
	I1214 20:27:01.157860   19165 config.go:176] Loaded profile config "kindnet-20211214195818-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 20:27:01.185332   19165 host.go:66] Checking if "kindnet-20211214195818-1964" exists ...
	I1214 20:27:01.185338   19165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1214 20:27:01.185637   19165 cli_runner.go:115] Run: docker container inspect kindnet-20211214195818-1964 --format={{.State.Status}}
	I1214 20:27:01.185736   19165 cli_runner.go:115] Run: docker container inspect kindnet-20211214195818-1964 --format={{.State.Status}}
	I1214 20:27:01.216712   19165 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20211214195818-1964
	I1214 20:27:01.216742   19165 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1214 20:27:01.338016   19165 addons.go:153] Setting addon default-storageclass=true in "kindnet-20211214195818-1964"
	I1214 20:27:01.358798   19165 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1214 20:27:01.358814   19165 addons.go:165] addon default-storageclass should already be in state true
	I1214 20:27:01.358848   19165 host.go:66] Checking if "kindnet-20211214195818-1964" exists ...
	I1214 20:27:01.358933   19165 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1214 20:27:01.358942   19165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1214 20:27:01.359017   19165 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211214195818-1964
	I1214 20:27:01.359819   19165 cli_runner.go:115] Run: docker container inspect kindnet-20211214195818-1964 --format={{.State.Status}}
	I1214 20:27:01.375339   19165 node_ready.go:35] waiting up to 5m0s for node "kindnet-20211214195818-1964" to be "Ready" ...
	I1214 20:27:01.417562   19165 start.go:774] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I1214 20:27:01.578372   19165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49459 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/kindnet-20211214195818-1964/id_rsa Username:docker}
	I1214 20:27:01.578409   19165 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I1214 20:27:01.578418   19165 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1214 20:27:01.578486   19165 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20211214195818-1964
	I1214 20:27:01.685973   19165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1214 20:27:01.706236   19165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49459 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/kindnet-20211214195818-1964/id_rsa Username:docker}
	I1214 20:27:01.819193   19165 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1214 20:27:02.043224   19165 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I1214 20:27:02.043248   19165 addons.go:417] enableAddons completed in 885.541557ms
	I1214 20:27:03.383123   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:05.385465   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:07.882854   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:09.883043   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:11.884385   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:13.884713   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:16.385011   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:18.887507   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:21.384849   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:23.884274   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:25.889701   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:28.387126   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:30.387790   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:32.883824   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:34.885835   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:37.386373   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:39.393857   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:41.886141   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:43.887863   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:46.384624   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:48.387058   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:50.389391   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:52.394871   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:54.883961   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:56.884003   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:27:58.888917   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:01.386624   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:03.890109   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:06.389763   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:08.888903   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:10.890469   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:13.383322   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:15.386952   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:17.883460   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:19.883826   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:22.386150   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:24.885685   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:26.888393   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:29.383321   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:31.383367   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:33.388465   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:35.884999   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:38.391917   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:40.884125   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:42.885232   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:44.887247   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:47.384795   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:49.385646   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:51.887309   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:54.383750   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:56.391581   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:28:58.887733   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:01.391588   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:03.887188   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:06.386034   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:08.885627   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:11.388870   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:13.885523   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:15.888136   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:18.387442   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:20.895053   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:23.389939   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:25.393661   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:27.886319   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:29.890926   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:31.891733   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:34.385826   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:36.894995   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:39.395442   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:41.886550   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:43.889939   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:46.386424   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:48.386789   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:50.884289   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:53.385195   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:55.387552   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:29:57.891071   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:00.384475   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:02.389906   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:04.885604   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:06.885758   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:09.389790   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:11.885954   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:13.888720   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:16.388132   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:18.884210   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:20.884296   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:22.887871   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:25.386034   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:27.885635   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:30.388865   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:32.884332   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:35.387305   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:37.886563   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:40.387001   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:42.887214   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:45.385599   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:47.884051   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:49.887830   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:52.384608   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:54.385975   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:56.886927   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:30:59.385658   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:31:01.389145   19165 node_ready.go:58] node "kindnet-20211214195818-1964" has status "Ready":"False"
	I1214 20:31:01.392116   19165 node_ready.go:38] duration metric: took 4m0.015576222s waiting for node "kindnet-20211214195818-1964" to be "Ready" ...
	I1214 20:31:01.435228   19165 out.go:176] 
	W1214 20:31:01.435377   19165 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W1214 20:31:01.435393   19165 out.go:241] * 
	W1214 20:31:01.436438   19165 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1214 20:31:01.513690   19165 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (327.38s)
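The four minutes of `node_ready` polling above are minikube waiting (roughly every 2.5s) for the node to reach the Ready condition before its 5m GUEST_START budget expires. A minimal, generic sketch of that poll-until-timeout pattern — a hypothetical helper, not minikube's actual `node_ready.go` implementation:

```python
import time

def poll_until(check, timeout_s, interval_s=2.5):
    """Call check() every interval_s seconds until it returns True
    or timeout_s elapses. Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        # Sleep, but never past the deadline.
        time.sleep(min(interval_s, max(0.0, deadline - time.monotonic())))
    return False

# Example: a hypothetical node that only reports Ready on the third probe.
calls = {"n": 0}
def node_is_ready():
    calls["n"] += 1
    return calls["n"] >= 3

ok = poll_until(node_is_ready, timeout_s=1.0, interval_s=0.01)
```

In the failed run above the equivalent of `node_is_ready` never returned true, so the loop exhausted its budget and the test exited with status 80.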

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (359.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
E1214 20:28:11.963956    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.153769079s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134617002s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14912157s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.161169859s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
E1214 20:29:26.107616    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:29:27.136772    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133220917s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1214 20:29:37.808025    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
E1214 20:29:53.969450    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151461445s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.149768312s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1214 20:30:21.149284    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12255906s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1214 20:30:44.853032    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
E1214 20:30:44.858349    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
E1214 20:30:44.868569    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
E1214 20:30:44.888695    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
E1214 20:30:44.936934    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
E1214 20:30:44.964777    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 20:30:45.024197    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
E1214 20:30:45.190395    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
E1214 20:30:45.513217    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
E1214 20:30:46.159574    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
E1214 20:30:47.441561    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
E1214 20:30:50.003395    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
E1214 20:30:55.130600    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.235179521s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
E1214 20:32:06.823644    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
E1214 20:32:08.052445    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12387863s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1214 20:32:44.255069    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142901849s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1214 20:33:28.744278    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13106823s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (359.24s)
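net_test.go re-runs the `nslookup kubernetes.default` probe until its deadline, then compares the last output against the expected service IP (`want=*"10.96.0.1"*`). A hedged sketch of that retry-and-match shape — hypothetical names; the real test shells out to kubectl rather than calling a function:

```python
import time

def retry_for_output(probe, want_substring, timeout_s, interval_s=1.0):
    """Re-run probe() until its output contains want_substring or the
    deadline passes. Returns (ok, last_output)."""
    deadline = time.monotonic() + timeout_s
    while True:
        last = probe()
        if want_substring in last:
            return True, last
        if time.monotonic() >= deadline:
            return False, last
        time.sleep(interval_s)

# Example: a simulated probe that times out twice, then resolves.
outputs = iter([
    ";; connection timed out; no servers could be reached",
    ";; connection timed out; no servers could be reached",
    "Name:\tkubernetes.default\nAddress: 10.96.0.1",
])
ok, out = retry_for_output(lambda: next(outputs), "10.96.0.1",
                           timeout_s=1.0, interval_s=0.01)
```

In the bridge/DNS failure above, every probe returned the timeout banner, so the final comparison failed with got=`";; connection timed out; no servers could be reached"`.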

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (7200.563s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
E1214 20:37:40.925760    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140590248s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
E1214 20:37:44.254967    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
E1214 20:37:53.641476    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
E1214 20:37:53.648394    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
E1214 20:37:53.658912    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
E1214 20:37:53.683345    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
E1214 20:37:53.724397    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
E1214 20:37:53.804599    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
E1214 20:37:53.968431    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
E1214 20:37:54.289237    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
E1214 20:37:54.931466    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
E1214 20:37:56.216971    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142819837s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E1214 20:37:58.782453    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
E1214 20:38:03.903345    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
E1214 20:38:14.149673    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159477173s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: signal: killed (701.346872ms)
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.15µs)
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.238µs)
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (990ns)
E1214 20:38:34.635142    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (844ns)
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (782ns)
E1214 20:39:07.329360    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.761µs)
E1214 20:39:15.600015    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
E1214 20:39:26.100598    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:39:27.135875    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 20:39:37.810169    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.502µs)
E1214 20:40:21.157037    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
E1214 20:40:37.522366    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.19µs)
E1214 20:40:44.856085    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
E1214 20:40:44.965398    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 20:40:49.333045    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.321µs)
E1214 20:41:46.038047    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 20:41:51.244414    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.175µs)
net_test.go:169: failed to do nslookup on kubernetes.default: context deadline exceeded
net_test.go:174: failed nslookup: got="", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kubenet/DNS (306.33s)
E1214 21:02:44.317112    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
E1214 21:02:53.693588    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/bridge-20211214195817-1964/client.crt: no such file or directory
E1214 21:03:34.754856    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/old-k8s-version-20211214203429-1964/client.crt: no such file or directory
E1214 21:03:36.701987    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/kubenet-20211214195817-1964/client.crt: no such file or directory
E1214 21:03:47.922177    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/no-preload-20211214204251-1964/client.crt: no such file or directory
E1214 21:03:47.928988    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/no-preload-20211214204251-1964/client.crt: no such file or directory
E1214 21:03:47.944184    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/no-preload-20211214204251-1964/client.crt: no such file or directory
E1214 21:03:47.964897    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/no-preload-20211214204251-1964/client.crt: no such file or directory
E1214 21:03:48.010728    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/no-preload-20211214204251-1964/client.crt: no such file or directory
E1214 21:03:48.010801    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
E1214 21:03:48.090950    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/no-preload-20211214204251-1964/client.crt: no such file or directory
E1214 21:03:48.261121    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/no-preload-20211214204251-1964/client.crt: no such file or directory
E1214 21:03:48.581701    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/no-preload-20211214204251-1964/client.crt: no such file or directory
E1214 21:03:49.225767    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/no-preload-20211214204251-1964/client.crt: no such file or directory
E1214 21:03:50.509803    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/no-preload-20211214204251-1964/client.crt: no such file or directory
E1214 21:03:53.076313    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/no-preload-20211214204251-1964/client.crt: no such file or directory
E1214 21:03:58.200635    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/no-preload-20211214204251-1964/client.crt: no such file or directory
E1214 21:04:08.443878    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/no-preload-20211214204251-1964/client.crt: no such file or directory
E1214 21:04:26.160051    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 21:04:27.200353    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 21:04:28.934347    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/no-preload-20211214204251-1964/client.crt: no such file or directory
E1214 21:04:37.869400    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 21:05:09.898792    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/no-preload-20211214204251-1964/client.crt: no such file or directory
E1214 21:05:21.212759    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
E1214 21:05:28.122432    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
panic: test timed out after 2h0m0s

goroutine 3311 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:1788 +0x8e
created by time.goFunc
	/usr/local/go/src/time/sleep.go:180 +0x31

goroutine 1 [chan receive, 4 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1225 +0x311
testing.tRunner(0xc000642ea0, 0xc0006a3c48)
	/usr/local/go/src/testing/testing.go:1265 +0x13b
testing.runTests(0xc000733b80, {0x41e89c0, 0x24, 0x24}, {0xc000066198, 0xffffffffffffffff, 0x42085c0})
	/usr/local/go/src/testing/testing.go:1596 +0x43f
testing.(*M).Run(0xc000733b80)
	/usr/local/go/src/testing/testing.go:1504 +0x51d
k8s.io/minikube/test/integration.TestMain(0x1010000047b9108)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x9e
main.main()
	_testmain.go:115 +0x165

goroutine 6 [syscall]:
syscall.syscall(0x107e440, 0xa, 0x33, 0x0)
	/usr/local/go/src/runtime/sys_darwin.go:22 +0x3b
syscall.fcntl(0x119, 0x40000, 0x0)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:320 +0x30
internal/poll.(*FD).Fsync.func1(...)
	/usr/local/go/src/internal/poll/fd_fsync_darwin.go:18
internal/poll.ignoringEINTR(...)
	/usr/local/go/src/internal/poll/fd_posix.go:75
internal/poll.(*FD).Fsync(0xc000aa82f0)
	/usr/local/go/src/internal/poll/fd_fsync_darwin.go:17 +0xfc
os.(*File).Sync(0xc000aa82f0)
	/usr/local/go/src/os/file_posix.go:169 +0x4e
k8s.io/klog/v2.(*syncBuffer).Sync(0xc000de8990)
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.30.0/klog.go:1081 +0x1d
k8s.io/klog/v2.(*loggingT).flushAll(0x4208ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.30.0/klog.go:1201 +0x6f
k8s.io/klog/v2.(*loggingT).lockAndFlushAll(0x4208ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.30.0/klog.go:1189 +0x4a
k8s.io/klog/v2.(*loggingT).flushDaemon(0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.30.0/klog.go:1182 +0x5b
created by k8s.io/klog/v2.init.0
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.30.0/klog.go:420 +0xfb

goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00019a580)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.23.0/stats/view/worker.go:276 +0xb9
created by go.opencensus.io/stats/view.init.0
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.23.0/stats/view/worker.go:34 +0x92

goroutine 2445 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0007a9d50, 0x1b)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31791c8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0016ba5a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007a9d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc1916e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d84420, {0x3153620, 0xc000e1c060}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0015eefb8, 0x3b9aca00, 0x0, 0x1, 0x103ee65)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc0014b0600, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 3051 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x318f240, 0xc0010f4100}, 0xc0014f6168, 0x158c52a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x318f240, 0xc0010f4100}, 0x38, 0x158b9a5, 0xc000f0c260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x318f240, 0xc0010f4100}, 0x100503d, 0xc0010a6540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0008f4fd0, 0x117c6ed, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 2823 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 844 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 2383 [select]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 1245 [chan receive, 106 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007a9880, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2573 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 2399 [chan receive, 50 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000e5dac0, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 254 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000869280, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2571 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x318f240, 0xc00110ed00}, 0xc0014f7398, 0x158c52a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x318f240, 0xc00110ed00}, 0x38, 0x158b9a5, 0xc0016365d0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x318f240, 0xc00110ed00}, 0x100503d, 0xc000d848a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015ef7d0, 0x117c6ed, 0xc00079b380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 2812 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x318f240, 0xc001497a80}, 0xc000673da0, 0x158c52a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x318f240, 0xc001497a80}, 0x38, 0x158b9a5, 0xc000f6a0b0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x318f240, 0xc001497a80}, 0x100503d, 0xc001082e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0017d2fd0, 0x117c6ed, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 2830 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000a51ec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2380 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000e5da90, 0x1c)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31791c8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0016bbaa0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000e5dac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc151878)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x314e9c0, {0x3153620, 0xc000dc0360}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00068cfb8, 0x3b9aca00, 0x0, 0x1, 0x103ee65)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc0014443c0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 2822 [select, 29 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 2530 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 253 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00035d1a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 258 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000869250, 0x2d)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31791c8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00035cfc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000869280)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc3522d0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x1a, {0x3153620, 0xc0006247e0}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x1112380, 0x3b9aca00, 0x0, 0x40, 0xc000692f80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x111300a, 0xc000782340, 0xc0007a8280)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 259 [select]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x318f240, 0xc000028040}, 0xc000051548, 0x158c52a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x318f240, 0xc000028040}, 0x38, 0x158b9a5, 0xc000662e10)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x318f240, 0xc000028040}, 0xc000643ba0, 0xc000693fb0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x111300a, 0xc000643ba0, 0xc0007a8140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 260 [select, 117 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 261 [select]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 2685 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001497490, 0x18)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31791c8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000a349c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0014974c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc191988)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x314e9c0, {0x3153620, 0xc000e26f30}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00068d7b8, 0x3b9aca00, 0x0, 0x1, 0x103ee65)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc001444660, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 2802 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000a34000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2831 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0010cb380, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2511 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0010caad0, 0x1b)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31791c8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000406e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0010cab00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x4fb41f8)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x318f2b0, {0x3153620, 0xc000e54510}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x1112380, 0x3b9aca00, 0x0, 0x1, 0x103ee65)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc0014d53e0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 2446 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x318f240, 0xc000e5c000}, 0xc0000502e8, 0x158c52a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x318f240, 0xc000e5c000}, 0x38, 0x158b9a5, 0xc000e6c050)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x318f240, 0xc000e5c000}, 0x0, 0xc0015f07d0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0, 0xc0015a39e0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 1769 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x318f240, 0xc0010ca080}, 0xc0014f6150, 0x158c52a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x318f240, 0xc0010ca080}, 0x38, 0x158b9a5, 0xc0011a22a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x318f240, 0xc0010ca080}, 0x100503d, 0xc000213260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015ea7d0, 0x117c6ed, 0xc000730e70)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 2551 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000e50960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2821 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x318f240, 0xc0010ca2c0}, 0xc000050c48, 0x158c52a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x318f240, 0xc0010ca2c0}, 0x38, 0x158b9a5, 0xc000b00e70)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x318f240, 0xc0010ca2c0}, 0x0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x1519fca, 0xc000adadc0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 2518 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000407320)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2529 [select, 46 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 2448 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 2382 [select, 50 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 3235 [select]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 1210 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0007a9590, 0x2a)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31791c8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0016d2a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007a9880)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc291ae8)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000f986c0, {0x3153620, 0xc00097c1b0}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0015ee7b8, 0x3b9aca00, 0x0, 0x1, 0x103ee65)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0xc0015ee7d0, 0x117c6ed, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 2344 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0012f5800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2519 [chan receive, 46 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0010cab00, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2415 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0016ba6c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 3200 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0010ca610, 0x1)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31791c8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000066360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0010ca640)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc151878)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0016d2ba0, {0x3153620, 0xc000dc1b60}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006907b8, 0x3b9aca00, 0x0, 0x1, 0x103ee65)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0xc0006907d0, 0x117c6ed, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 2686 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x318f240, 0xc00120ec80}, 0xc0014aa408, 0x158c52a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x318f240, 0xc00120ec80}, 0x38, 0x158b9a5, 0xc000f6ab50)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x318f240, 0xc00120ec80}, 0x1f, 0xc00068cfb0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x111300a, 0xc000628b60, 0xc000e26b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 1211 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x318f240, 0xc001496080}, 0xc0014f6000, 0x158c52a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x318f240, 0xc001496080}, 0x38, 0x158b9a5, 0xc000b00380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x318f240, 0xc001496080}, 0x616e20226d657473, 0x206563617073656d)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x2220737574617473, 0x223a227964616552, 0x90a2265736c6146)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 604 [IO wait, 112 minutes]:
internal/poll.runtime_pollWait(0xc3a3548, 0x72)
	/usr/local/go/src/runtime/netpoll.go:234 +0x89
internal/poll.(*pollDesc).wait(0xc000e16680, 0x4, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000e16680)
	/usr/local/go/src/internal/poll/fd_unix.go:402 +0x22c
net.(*netFD).accept(0xc000e16680)
	/usr/local/go/src/net/fd_unix.go:173 +0x35
net.(*TCPListener).accept(0xc000f681c8)
	/usr/local/go/src/net/tcpsock_posix.go:140 +0x28
net.(*TCPListener).Accept(0xc000f681c8)
	/usr/local/go/src/net/tcpsock.go:262 +0x3d
net/http.(*Server).Serve(0xc000188700, {0x3185940, 0xc000f681c8})
	/usr/local/go/src/net/http/server.go:3002 +0x394
net/http.(*Server).ListenAndServe(0xc000188700)
	/usr/local/go/src/net/http/server.go:2931 +0x7d
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd, 0xc0005b96c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2046 +0x1e
created by k8s.io/minikube/test/integration.startHTTPProxy
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2045 +0x149

goroutine 841 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00120fd50, 0x2b)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31791c8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0004071a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00120fd80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc153390)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00035d200, {0x3153620, 0xc000de9da0}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008f37b8, 0x3b9aca00, 0x0, 0x1, 0x103ee65)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc0014b0360, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 1213 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 2670 [chan receive, 39 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0014974c0, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2813 [select, 29 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 2416 [chan receive, 48 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007a9d80, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 843 [select, 109 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 1770 [select, 69 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 1969 [chan receive, 10 minutes]:
testing.(*T).Run(0xc000702d00, {0x2bb8da9, 0xc001217f20}, 0xc00079a280)
	/usr/local/go/src/testing/testing.go:1307 +0x375
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000702d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:115 +0x725
testing.tRunner(0xc000702d00, 0xc0010f5d00)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a

goroutine 2512 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x318f240, 0xc000028340}, 0xc0014f60c0, 0x158c52a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x318f240, 0xc000028340}, 0x38, 0x158b9a5, 0xc000b009d0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x318f240, 0xc000028340}, 0x100503d, 0xc000a34720)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015f0fd0, 0x117c6ed, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 828 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00120fd80, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 1744 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000f98f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2552 [chan receive, 46 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0010f5a00, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2688 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 1771 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 1792 [chan receive, 61 minutes]:
testing.(*T).Run(0xc00080aea0, {0x2bb6475, 0x61b9679a}, 0x2ed14b0)
	/usr/local/go/src/testing/testing.go:1307 +0x375
k8s.io/minikube/test/integration.TestStartStop(0x318f240)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:45 +0x3b
testing.tRunner(0xc00080aea0, 0x2ed14b8)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a

goroutine 1118 [select, 108 minutes]:
net/http.(*persistConn).writeLoop(0xc000a150e0)
	/usr/local/go/src/net/http/transport.go:2386 +0xfb
created by net/http.(*Transport).dialConn
	/usr/local/go/src/net/http/transport.go:1748 +0x1e65

goroutine 842 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x318f240, 0xc0007a9f00}, 0xc000123d88, 0x158c52a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x318f240, 0xc0007a9f00}, 0x38, 0x158b9a5, 0xc000f55e50)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x318f240, 0xc0007a9f00}, 0x0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0008f97d0, 0x117c6ed, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 827 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0004072c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2314 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x318f240, 0xc0010f4000}, 0xc0001220c0, 0x158c52a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x318f240, 0xc0010f4000}, 0x38, 0x158b9a5, 0xc0016860c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x318f240, 0xc0010f4000}, 0x100503d, 0xc0016bb2c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00008dfd0, 0x117c6ed, 0xc000fc66f0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 2398 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0016bbbc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 2345 [chan receive, 52 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00110e8c0, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 1244 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0016d2b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 3018 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001083da0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 1768 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000669650, 0x20)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31791c8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000f98de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000669740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x4fb1e60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x314e9c0, {0x3153620, 0xc000ea4360}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0017d27b8, 0x3b9aca00, 0x0, 0x1, 0x103ee65)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc00152a060, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 3234 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 2803 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001496240, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2447 [select, 48 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 2381 [select]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x318f240, 0xc000668480}, 0xc0006732c0, 0x158c52a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x318f240, 0xc000668480}, 0x38, 0x158b9a5, 0xc000f6a180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x318f240, 0xc000668480}, 0x0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0017d5fd0, 0x117c6ed, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 2687 [select, 39 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 2820 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001496210, 0x16)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31791c8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0010a7e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001496240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc1915d0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0010a6780, {0x3153620, 0xc000e18960}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0017cf7b8, 0x3b9aca00, 0x0, 0x1, 0x103ee65)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc0014d45a0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 2316 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 1777 [chan receive, 69 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000669740, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 2572 [select, 46 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 2811 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0010cb350, 0x16)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31791c8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000a519e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0010cb380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc151a10)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x2bb953e, {0x3153620, 0xc00086bb60}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x1112380, 0x3b9aca00, 0x0, 0x1, 0x103ee65)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc0014d4660, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 2570 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0010f59d0, 0x1a)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31791c8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000e50840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0010f5a00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x4ee1878)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0, {0x3153620, 0xc000fc6f60}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001444c60, 0x3b9aca00, 0x0, 0x1, 0x103ee65)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc0014d4d80, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 1950 [chan receive, 4 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1225 +0x311
testing.tRunner(0xc00080b380, 0x2ed14b0)
	/usr/local/go/src/testing/testing.go:1265 +0x13b
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a

goroutine 2313 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00110e890, 0x1c)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31791c8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0012f56e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00110e8c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x4fb3f50)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0012cc240, {0x3153620, 0xc000fb6180}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000087fb8, 0x3b9aca00, 0x0, 0x1, 0x103ee65)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc001444180, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 1212 [select, 106 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 3233 [select]:
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x318f240, 0xc0010f4e00}, 0xc0014ab938, 0x158c52a)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:655 +0xe7
k8s.io/apimachinery/pkg/util/wait.poll({0x318f240, 0xc0010f4e00}, 0x38, 0x158b9a5, 0xc000b01090)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:591 +0x9a
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x318f240, 0xc0010f4e00}, 0x100503d, 0xc0015ef7d0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:542 +0x49
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0, 0xc0014443c0, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:533 +0x7c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:142 +0x326

goroutine 2669 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000a34ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 1117 [select, 108 minutes]:
net/http.(*persistConn).readLoop(0xc000a150e0)
	/usr/local/go/src/net/http/transport.go:2207 +0xd8a
created by net/http.(*Transport).dialConn
	/usr/local/go/src/net/http/transport.go:1747 +0x1e05

goroutine 2814 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 2315 [select, 52 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 3019 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00120f200, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 3267 [syscall, 6 minutes]:
syscall.syscall6(0x107e260, 0x58f3, 0xc000a8bb04, 0x0, 0xc00086e750, 0x0, 0x0)
	/usr/local/go/src/runtime/sys_darwin.go:44 +0x3b
syscall.wait4(0xc000a8bb08, 0x100d227, 0x90, 0x2b45000)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x48
syscall.Wait4(0xc0007316b0, 0xc000a8bb3c, 0xc000a8bac0, 0x0)
	/usr/local/go/src/syscall/syscall_bsd.go:145 +0x2b
os.(*Process).wait(0xc000f5cb40)
	/usr/local/go/src/os/exec_unix.go:44 +0x77
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:132
os/exec.(*Cmd).Wait(0xc00106cc60)
	/usr/local/go/src/os/exec/exec.go:507 +0x54
os/exec.(*Cmd).Run(0xc000865400)
	/usr/local/go/src/os/exec/exec.go:341 +0x39
k8s.io/minikube/test/integration.Run(0xc0004669c0, 0xc00106cc60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:104 +0x1f5
k8s.io/minikube/test/integration.validateSecondStart({0x318f2b0, 0xc0004063c0}, 0xc0004669c0, {0xc00011e0f0, 0x2e}, {0x6451e28, 0x6451e28015ef758}, {0x61b97669, 0xc0015ef760}, {0xc00020a240, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:241 +0x186
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x45d964b800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:141 +0x72
testing.tRunner(0xc0004669c0, 0xc000b1ca00)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a

goroutine 3145 [chan receive, 6 minutes]:
testing.(*T).Run(0xc00158d860, {0x2bc19fd, 0x1112233}, 0xc000b1ca00)
	/usr/local/go/src/testing/testing.go:1307 +0x375
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00158d860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:140 +0x4ae
testing.tRunner(0xc00158d860, 0xc00079a280)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a

goroutine 3230 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000066480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:231 +0x34e
created by k8s.io/client-go/util/workqueue.newDelayingQueue
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/delaying_queue.go:68 +0x23b

goroutine 3231 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0010ca640, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:147 +0x335
created by k8s.io/client-go/transport.(*tlsTransportCache).get
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cache.go:104 +0x485

goroutine 3052 [select, 16 minutes]:
k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:298 +0x77
created by k8s.io/apimachinery/pkg/util/wait.contextForChannel
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:297 +0xc8

goroutine 3050 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00120f1d0, 0x11)
	/usr/local/go/src/runtime/sema.go:513 +0x13d
sync.(*Cond).Wait(0x31791c8)
	/usr/local/go/src/sync/cond.go:56 +0x8c
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001083c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/util/workqueue/queue.go:151 +0x9e
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00120f200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:156 +0x58
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc1915d0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x314e9c0, {0x3153620, 0xc000e18360}, 0x1, 0xc000064360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0017d5fb8, 0x3b9aca00, 0x0, 0x1, 0x103ee65)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(0x0, 0xc0007f9080, 0x0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/client-go/transport.(*dynamicClientCert).Run
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.22.4/transport/cert_rotation.go:140 +0x26f

goroutine 3053 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:705 +0x1c9
created by k8s.io/apimachinery/pkg/util/wait.poller.func1
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.22.4/pkg/util/wait/wait.go:688 +0xcf

goroutine 3268 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0xc3a2980, 0x72)
	/usr/local/go/src/runtime/netpoll.go:234 +0x89
internal/poll.(*pollDesc).wait(0xc000d85980, 0xc00100db14, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000d85980, {0xc00100db14, 0x2ec, 0x2ec})
	/usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:32
os.(*File).Read(0xc000b0d910, {0xc00100db14, 0x1066e0e, 0xc000419ea0})
	/usr/local/go/src/os/file.go:119 +0x5e
bytes.(*Buffer).ReadFrom(0xc000e558f0, {0x31542a0, 0xc000b0d910})
	/usr/local/go/src/bytes/buffer.go:204 +0x98
io.copyBuffer({0x314e9c0, 0xc000e558f0}, {0x31542a0, 0xc000b0d910}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:409 +0x14b
io.Copy(...)
	/usr/local/go/src/io/io.go:382
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:311 +0x3a
os/exec.(*Cmd).Start.func1(0xc000e6cde0)
	/usr/local/go/src/os/exec/exec.go:441 +0x25
created by os/exec.(*Cmd).Start
	/usr/local/go/src/os/exec/exec.go:440 +0x80d

goroutine 3270 [select, 6 minutes]:
os/exec.(*Cmd).Start.func2()
	/usr/local/go/src/os/exec/exec.go:449 +0x7b
created by os/exec.(*Cmd).Start
	/usr/local/go/src/os/exec/exec.go:448 +0x7ef

goroutine 3269 [IO wait]:
internal/poll.runtime_pollWait(0xc3a3460, 0x72)
	/usr/local/go/src/runtime/netpoll.go:234 +0x89
internal/poll.(*pollDesc).wait(0xc000d85a40, 0xc00112841f, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000d85a40, {0xc00112841f, 0x79e1, 0x79e1})
	/usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:32
os.(*File).Read(0xc000b0d928, {0xc00112841f, 0xc00041fea0, 0xc00041fea0})
	/usr/local/go/src/os/file.go:119 +0x5e
bytes.(*Buffer).ReadFrom(0xc000e55920, {0x31542a0, 0xc000b0d928})
	/usr/local/go/src/bytes/buffer.go:204 +0x98
io.copyBuffer({0x314e9c0, 0xc000e55920}, {0x31542a0, 0xc000b0d928}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:409 +0x14b
io.Copy(...)
	/usr/local/go/src/io/io.go:382
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:311 +0x3a
os/exec.(*Cmd).Start.func1(0xc000240380)
	/usr/local/go/src/os/exec/exec.go:441 +0x25
created by os/exec.(*Cmd).Start
	/usr/local/go/src/os/exec/exec.go:440 +0x80d


Test pass (200/226)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 25.72
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.28
10 TestDownloadOnly/v1.22.4/json-events 8.14
11 TestDownloadOnly/v1.22.4/preload-exists 0
14 TestDownloadOnly/v1.22.4/kubectl 0
15 TestDownloadOnly/v1.22.4/LogsDuration 0.28
17 TestDownloadOnly/v1.23.0-rc.1/json-events 8.21
18 TestDownloadOnly/v1.23.0-rc.1/preload-exists 0
21 TestDownloadOnly/v1.23.0-rc.1/kubectl 0
22 TestDownloadOnly/v1.23.0-rc.1/LogsDuration 0.28
23 TestDownloadOnly/DeleteAll 1.12
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.63
25 TestDownloadOnlyKic 10.96
26 TestOffline 116.51
28 TestAddons/Setup 188.66
32 TestAddons/parallel/MetricsServer 5.78
33 TestAddons/parallel/HelmTiller 11.12
34 TestAddons/parallel/Olm 40.74
35 TestAddons/parallel/CSI 44.34
37 TestAddons/serial/GCPAuth 18.42
38 TestAddons/StoppedEnableDisable 18.35
39 TestCertOptions 102.51
40 TestCertExpiration 292.51
41 TestDockerFlags 82.62
42 TestForceSystemdFlag 70.32
43 TestForceSystemdEnv 68.21
45 TestHyperKitDriverInstallOrUpdate 7.56
48 TestErrorSpam/setup 72.64
49 TestErrorSpam/start 2.29
50 TestErrorSpam/status 1.92
51 TestErrorSpam/pause 2.14
52 TestErrorSpam/unpause 2.12
53 TestErrorSpam/stop 18.12
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 123.95
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 7.51
60 TestFunctional/serial/KubeContext 0.04
61 TestFunctional/serial/KubectlGetPods 1.63
64 TestFunctional/serial/CacheCmd/cache/add_remote 9.49
65 TestFunctional/serial/CacheCmd/cache/add_local 2.1
66 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
67 TestFunctional/serial/CacheCmd/cache/list 0.07
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.72
69 TestFunctional/serial/CacheCmd/cache/cache_reload 3.99
70 TestFunctional/serial/CacheCmd/cache/delete 0.14
71 TestFunctional/serial/MinikubeKubectlCmd 0.47
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.56
75 TestFunctional/serial/LogsCmd 3.23
76 TestFunctional/serial/LogsFileCmd 3.11
78 TestFunctional/parallel/ConfigCmd 0.39
79 TestFunctional/parallel/DashboardCmd 3.06
80 TestFunctional/parallel/DryRun 1.49
81 TestFunctional/parallel/InternationalLanguage 0.62
82 TestFunctional/parallel/StatusCmd 2.3
86 TestFunctional/parallel/AddonsCmd 0.28
87 TestFunctional/parallel/PersistentVolumeClaim 29.79
89 TestFunctional/parallel/SSHCmd 1.27
90 TestFunctional/parallel/CpCmd 2.52
91 TestFunctional/parallel/MySQL 28
92 TestFunctional/parallel/FileSync 0.71
93 TestFunctional/parallel/CertSync 3.99
97 TestFunctional/parallel/NodeLabels 0.05
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
101 TestFunctional/parallel/Version/short 0.09
102 TestFunctional/parallel/Version/components 1.23
103 TestFunctional/parallel/ImageCommands/ImageList 0.47
104 TestFunctional/parallel/ImageCommands/ImageBuild 4.26
105 TestFunctional/parallel/ImageCommands/Setup 4.2
106 TestFunctional/parallel/DockerEnv/bash 2.56
107 TestFunctional/parallel/UpdateContextCmd/no_changes 0.37
108 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.98
109 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.4
110 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.62
111 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.74
112 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.74
113 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.91
114 TestFunctional/parallel/ImageCommands/ImageRemove 0.99
115 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.28
116 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.79
117 TestFunctional/parallel/ProfileCmd/profile_not_create 0.84
118 TestFunctional/parallel/ProfileCmd/profile_list 0.76
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.87
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.2
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 12.59
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
130 TestFunctional/parallel/MountCmd/any-port 9.93
131 TestFunctional/parallel/MountCmd/specific-port 3.54
132 TestFunctional/delete_addon-resizer_images 0.39
133 TestFunctional/delete_my-image_image 0.12
134 TestFunctional/delete_minikube_cached_images 0.12
137 TestIngressAddonLegacy/StartLegacyK8sCluster 134.82
139 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.35
140 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.62
144 TestJSONOutput/start/Command 125.54
145 TestJSONOutput/start/Audit 0
147 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/pause/Command 0.95
151 TestJSONOutput/pause/Audit 0
153 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/unpause/Command 0.77
157 TestJSONOutput/unpause/Audit 0
159 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
162 TestJSONOutput/stop/Command 18.05
163 TestJSONOutput/stop/Audit 0
165 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
167 TestErrorJSONOutput 0.76
169 TestKicCustomNetwork/create_custom_network 89.58
170 TestKicCustomNetwork/use_default_bridge_network 73.59
171 TestKicExistingNetwork 87.42
172 TestMainNoArgs 0.07
175 TestMountStart/serial/StartWithMountFirst 74.79
176 TestMountStart/serial/StartWithMountSecond 73.83
177 TestMountStart/serial/VerifyMountFirst 0.63
178 TestMountStart/serial/VerifyMountSecond 0.63
179 TestMountStart/serial/DeleteFirst 11.62
180 TestMountStart/serial/VerifyMountPostDelete 0.63
181 TestMountStart/serial/Stop 17.19
182 TestMountStart/serial/RestartStopped 48.31
183 TestMountStart/serial/VerifyMountPostStop 0.62
186 TestMultiNode/serial/FreshStart2Nodes 228.43
187 TestMultiNode/serial/DeployApp2Nodes 6.49
188 TestMultiNode/serial/PingHostFrom2Pods 0.84
189 TestMultiNode/serial/AddNode 113.9
190 TestMultiNode/serial/ProfileList 0.68
191 TestMultiNode/serial/CopyFile 22.64
192 TestMultiNode/serial/StopNode 10.36
193 TestMultiNode/serial/StartAfterStop 50.7
194 TestMultiNode/serial/RestartKeepsNodes 249.28
195 TestMultiNode/serial/DeleteNode 17.38
196 TestMultiNode/serial/StopMultiNode 35.39
197 TestMultiNode/serial/RestartMultiNode 148.08
198 TestMultiNode/serial/ValidateNameConflict 104.77
202 TestPreload 207.42
204 TestScheduledStopUnix 154.8
205 TestSkaffold 122.43
207 TestInsufficientStorage 63.47
208 TestRunningBinaryUpgrade 130.92
210 TestKubernetesUpgrade 233.26
211 TestMissingContainerUpgrade 171.37
223 TestStoppedBinaryUpgrade/Setup 0.88
224 TestStoppedBinaryUpgrade/Upgrade 148.38
225 TestStoppedBinaryUpgrade/MinikubeLogs 3.32
234 TestPause/serial/Start 113.96
235 TestPause/serial/SecondStartNoReconfiguration 8.08
237 TestNoKubernetes/serial/StartNoK8sWithVersion 0.35
238 TestNoKubernetes/serial/StartWithK8s 56.39
239 TestPause/serial/Pause 0.97
240 TestPause/serial/VerifyStatus 0.75
241 TestPause/serial/Unpause 1.35
242 TestPause/serial/PauseAgain 1.11
243 TestPause/serial/DeletePaused 13.28
244 TestPause/serial/VerifyDeletedResources 1.02
245 TestNoKubernetes/serial/StartWithStopK8s 22.75
246 TestNoKubernetes/serial/Start 37.62
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 9.22
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 11.29
249 TestNoKubernetes/serial/VerifyK8sNotRunning 0.6
250 TestNoKubernetes/serial/ProfileList 1.42
251 TestNoKubernetes/serial/Stop 8.16
252 TestNoKubernetes/serial/StartNoArgs 19.47
253 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.61
254 TestNetworkPlugins/group/auto/Start 123.47
255 TestNetworkPlugins/group/false/Start 113.18
256 TestNetworkPlugins/group/auto/KubeletFlags 0.67
257 TestNetworkPlugins/group/auto/NetCatPod 15.19
258 TestNetworkPlugins/group/auto/DNS 0.15
259 TestNetworkPlugins/group/auto/Localhost 0.13
260 TestNetworkPlugins/group/auto/HairPin 5.15
261 TestNetworkPlugins/group/cilium/Start 172.89
262 TestNetworkPlugins/group/false/KubeletFlags 0.66
263 TestNetworkPlugins/group/false/NetCatPod 14.82
264 TestNetworkPlugins/group/false/DNS 0.15
265 TestNetworkPlugins/group/false/Localhost 0.15
266 TestNetworkPlugins/group/false/HairPin 5.14
268 TestNetworkPlugins/group/cilium/ControllerPod 5.03
269 TestNetworkPlugins/group/cilium/KubeletFlags 0.65
270 TestNetworkPlugins/group/cilium/NetCatPod 16.42
271 TestNetworkPlugins/group/cilium/DNS 0.16
272 TestNetworkPlugins/group/cilium/Localhost 0.13
273 TestNetworkPlugins/group/cilium/HairPin 0.14
274 TestNetworkPlugins/group/custom-weave/Start 69.05
275 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.67
276 TestNetworkPlugins/group/custom-weave/NetCatPod 13.96
277 TestNetworkPlugins/group/enable-default-cni/Start 55.46
278 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.66
279 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.88
282 TestNetworkPlugins/group/bridge/Start 84.02
283 TestNetworkPlugins/group/bridge/KubeletFlags 0.69
284 TestNetworkPlugins/group/bridge/NetCatPod 17.79
286 TestNetworkPlugins/group/kubenet/Start 347.35
290 TestNetworkPlugins/group/kubenet/KubeletFlags 1
291 TestNetworkPlugins/group/kubenet/NetCatPod 14.92
TestDownloadOnly/v1.16.0/json-events (25.72s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211214190531-1964 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211214190531-1964 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (25.720195143s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (25.72s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20211214190531-1964
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20211214190531-1964: exit status 85 (276.142242ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/14 19:05:31
	Running on machine: 37309
	Binary: Built with gc go1.17.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1214 19:05:31.395347    1972 out.go:297] Setting OutFile to fd 1 ...
	I1214 19:05:31.395474    1972 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:05:31.395479    1972 out.go:310] Setting ErrFile to fd 2...
	I1214 19:05:31.395482    1972 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:05:31.395559    1972 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/bin
	W1214 19:05:31.395642    1972 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/config/config.json: no such file or directory
	I1214 19:05:31.396087    1972 out.go:304] Setting JSON to true
	I1214 19:05:31.423641    1972 start.go:112] hostinfo: {"hostname":"37309.local","uptime":307,"bootTime":1639537224,"procs":315,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1214 19:05:31.423737    1972 start.go:120] gopshost.Virtualization returned error: not implemented yet
	W1214 19:05:31.452112    1972 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball: no such file or directory
	I1214 19:05:31.452153    1972 notify.go:174] Checking for updates...
	I1214 19:05:31.478453    1972 driver.go:344] Setting default libvirt URI to qemu:///system
	W1214 19:05:31.564249    1972 docker.go:108] docker version returned error: exit status 1
	I1214 19:05:31.590243    1972 start.go:280] selected driver: docker
	I1214 19:05:31.590261    1972 start.go:795] validating driver "docker" against <nil>
	I1214 19:05:31.590386    1972 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 19:05:31.754770    1972 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/
local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 19:05:31.807637    1972 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 19:05:31.965797    1972 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/
local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 19:05:31.992805    1972 start_flags.go:284] no existing cluster config was found, will generate one from the flags 
	I1214 19:05:32.046371    1972 start_flags.go:365] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1214 19:05:32.046483    1972 start_flags.go:792] Wait components to verify : map[apiserver:true system_pods:true]
	I1214 19:05:32.046501    1972 cni.go:93] Creating CNI manager for ""
	I1214 19:05:32.046513    1972 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1214 19:05:32.046526    1972 start_flags.go:298] config:
	{Name:download-only-20211214190531-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20211214190531-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 19:05:32.072380    1972 cache.go:118] Beginning downloading kic base image for docker with docker
	I1214 19:05:32.098143    1972 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1214 19:05:32.098267    1972 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab in local docker daemon
	I1214 19:05:32.098397    1972 cache.go:107] acquiring lock: {Name:mk02ac1c3d7f4acf08eb88fbea8cc0086183446c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1214 19:05:32.098448    1972 cache.go:107] acquiring lock: {Name:mk3c50a2b827f625e2a5750b997c59c8275da1c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1214 19:05:32.098510    1972 cache.go:107] acquiring lock: {Name:mk8e2c1ca2c6d26b8450d9b6970d436a39752646 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1214 19:05:32.098519    1972 cache.go:107] acquiring lock: {Name:mk20339320a4a50ba7814754b6629542ff4cf532 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1214 19:05:32.099285    1972 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/download-only-20211214190531-1964/config.json ...
	I1214 19:05:32.099541    1972 cache.go:107] acquiring lock: {Name:mk3f221e29738b6ab44d0883b00d5b782709ba2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1214 19:05:32.099582    1972 cache.go:107] acquiring lock: {Name:mkd2a85b22d2205b55bd6d13316ffe59bd2842ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1214 19:05:32.099670    1972 cache.go:107] acquiring lock: {Name:mkaceb8823078e17b37c44d1324eac7a62cf5069 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1214 19:05:32.099790    1972 cache.go:107] acquiring lock: {Name:mk665322caed1febc33e8beda45fc89f51746dcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1214 19:05:32.099791    1972 cache.go:107] acquiring lock: {Name:mk58c6074dabbb94d1a82aa19c5254e0518ec803 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1214 19:05:32.099854    1972 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/download-only-20211214190531-1964/config.json: {Name:mk26dba7229256a2bb544a6ca3a08d1466dc4171 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1214 19:05:32.100037    1972 cache.go:107] acquiring lock: {Name:mk7c4ccf8ba8e131e31fa63cff7cd35976274c68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1214 19:05:32.100650    1972 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I1214 19:05:32.100817    1972 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I1214 19:05:32.100899    1972 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I1214 19:05:32.100905    1972 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I1214 19:05:32.100908    1972 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I1214 19:05:32.100813    1972 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I1214 19:05:32.101022    1972 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I1214 19:05:32.101124    1972 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1214 19:05:32.101134    1972 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
	I1214 19:05:32.101144    1972 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I1214 19:05:32.101196    1972 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1214 19:05:32.101607    1972 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/linux/v1.16.0/kubeadm
	I1214 19:05:32.101613    1972 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/linux/v1.16.0/kubectl
	I1214 19:05:32.101613    1972 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/linux/v1.16.0/kubelet
	I1214 19:05:32.103293    1972 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I1214 19:05:32.103552    1972 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I1214 19:05:32.103771    1972 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I1214 19:05:32.103793    1972 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I1214 19:05:32.105756    1972 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I1214 19:05:32.105076    1972 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I1214 19:05:32.105076    1972 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I1214 19:05:32.105892    1972 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I1214 19:05:32.105854    1972 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I1214 19:05:32.106111    1972 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I1214 19:05:32.209820    1972 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab to local cache
	I1214 19:05:32.210005    1972 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab in local cache directory
	I1214 19:05:32.210092    1972 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab to local cache
	I1214 19:05:33.037416    1972 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1
	I1214 19:05:33.058260    1972 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7
	I1214 19:05:33.910692    1972 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0
	I1214 19:05:34.019010    1972 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I1214 19:05:34.019033    1972 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 1.920654659s
	I1214 19:05:34.019043    1972 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I1214 19:05:34.026623    1972 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0
	I1214 19:05:34.131597    1972 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0
	I1214 19:05:34.136018    1972 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0
	I1214 19:05:34.146568    1972 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/pause_3.1
	I1214 19:05:34.146585    1972 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2
	I1214 19:05:34.184468    1972 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0
	I1214 19:05:34.216868    1972 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I1214 19:05:34.400938    1972 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
	I1214 19:05:34.400961    1972 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 2.302542839s
	I1214 19:05:34.400970    1972 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
	I1214 19:05:35.451074    1972 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I1214 19:05:35.451097    1972 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 3.351353048s
	I1214 19:05:35.451106    1972 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I1214 19:05:35.724776    1972 download.go:100] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/darwin/v1.16.0/kubectl
	I1214 19:05:35.927385    1972 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1214 19:05:35.927413    1972 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.827867326s
	I1214 19:05:35.927421    1972 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1214 19:05:36.126480    1972 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 exists
	I1214 19:05:36.126501    1972 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2" took 4.028056293s
	I1214 19:05:36.126510    1972 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 succeeded
	I1214 19:05:37.036707    1972 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I1214 19:05:37.036730    1972 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0" took 4.938328731s
	I1214 19:05:37.036741    1972 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I1214 19:05:37.451940    1972 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I1214 19:05:37.451959    1972 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0" took 5.353526575s
	I1214 19:05:37.451968    1972 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I1214 19:05:37.809205    1972 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I1214 19:05:37.809227    1972 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0" took 5.710857423s
	I1214 19:05:37.809236    1972 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I1214 19:05:37.958741    1972 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I1214 19:05:37.958761    1972 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0" took 5.860283263s
	I1214 19:05:37.958769    1972 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I1214 19:05:38.390028    1972 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 exists
	I1214 19:05:38.390047    1972 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0" took 6.29159561s
	I1214 19:05:38.390056    1972 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I1214 19:05:38.390070    1972 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211214190531-1964"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

                                                
                                    
TestDownloadOnly/v1.22.4/json-events (8.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211214190531-1964 --force --alsologtostderr --kubernetes-version=v1.22.4 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211214190531-1964 --force --alsologtostderr --kubernetes-version=v1.22.4 --container-runtime=docker --driver=docker : (8.137224887s)
--- PASS: TestDownloadOnly/v1.22.4/json-events (8.14s)

                                                
                                    
TestDownloadOnly/v1.22.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.4/preload-exists
--- PASS: TestDownloadOnly/v1.22.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.4/kubectl
--- PASS: TestDownloadOnly/v1.22.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.22.4/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.22.4/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20211214190531-1964
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20211214190531-1964: exit status 85 (275.356233ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/14 19:05:58
	Running on machine: 37309
	Binary: Built with gc go1.17.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1214 19:05:58.847500    2061 out.go:297] Setting OutFile to fd 1 ...
	I1214 19:05:58.847620    2061 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:05:58.847624    2061 out.go:310] Setting ErrFile to fd 2...
	I1214 19:05:58.847628    2061 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:05:58.847693    2061 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/bin
	W1214 19:05:58.847778    2061 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/config/config.json: no such file or directory
	I1214 19:05:58.847937    2061 out.go:304] Setting JSON to true
	I1214 19:05:58.872181    2061 start.go:112] hostinfo: {"hostname":"37309.local","uptime":334,"bootTime":1639537224,"procs":324,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1214 19:05:58.872286    2061 start.go:120] gopshost.Virtualization returned error: not implemented yet
	W1214 19:05:58.901274    2061 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball: no such file or directory
	I1214 19:05:58.901311    2061 notify.go:174] Checking for updates...
	I1214 19:05:58.928964    2061 config.go:176] Loaded profile config "download-only-20211214190531-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1214 19:05:58.929026    2061 start.go:703] api.Load failed for download-only-20211214190531-1964: filestore "download-only-20211214190531-1964": Docker machine "download-only-20211214190531-1964" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1214 19:05:58.929074    2061 driver.go:344] Setting default libvirt URI to qemu:///system
	W1214 19:05:58.929099    2061 start.go:703] api.Load failed for download-only-20211214190531-1964: filestore "download-only-20211214190531-1964": Docker machine "download-only-20211214190531-1964" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1214 19:05:59.023294    2061 docker.go:132] docker version: linux-20.10.6
	I1214 19:05:59.023429    2061 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 19:05:59.197802    2061 info.go:263] docker info: {ID:5AO3:Q7BV:QPO2:IORE:2FWE:BSI4:OSEF:34WA:NLU4:XM3Q:JID7:HR3K Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-15 03:05:59.13655847 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 19:05:59.224904    2061 start.go:280] selected driver: docker
	I1214 19:05:59.224928    2061 start.go:795] validating driver "docker" against &{Name:download-only-20211214190531-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20211214190531-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 19:05:59.225369    2061 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 19:05:59.399900    2061 info.go:263] docker info: {ID:5AO3:Q7BV:QPO2:IORE:2FWE:BSI4:OSEF:34WA:NLU4:XM3Q:JID7:HR3K Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:44 SystemTime:2021-12-15 03:05:59.339789602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 19:05:59.401855    2061 cni.go:93] Creating CNI manager for ""
	I1214 19:05:59.401873    2061 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1214 19:05:59.401889    2061 start_flags.go:298] config:
	{Name:download-only-20211214190531-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:download-only-20211214190531-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 19:05:59.429369    2061 cache.go:118] Beginning downloading kic base image for docker with docker
	I1214 19:05:59.462415    2061 preload.go:132] Checking if preload exists for k8s version v1.22.4 and runtime docker
	I1214 19:05:59.462442    2061 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab in local docker daemon
	I1214 19:05:59.537351    2061 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4
	I1214 19:05:59.537373    2061 cache.go:57] Caching tarball of preloaded images
	I1214 19:05:59.537572    2061 preload.go:132] Checking if preload exists for k8s version v1.22.4 and runtime docker
	I1214 19:05:59.563322    2061 preload.go:238] getting checksum for preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4 ...
	I1214 19:05:59.574631    2061 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab in local docker daemon, skipping pull
	I1214 19:05:59.574648    2061 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab exists in daemon, skipping load
	I1214 19:05:59.659286    2061 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4?checksum=md5:15731760796c2bc45c1bff6eaa86d8fa -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.22.4-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211214190531-1964"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.4/LogsDuration (0.28s)

                                                
                                    
TestDownloadOnly/v1.23.0-rc.1/json-events (8.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211214190531-1964 --force --alsologtostderr --kubernetes-version=v1.23.0-rc.1 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20211214190531-1964 --force --alsologtostderr --kubernetes-version=v1.23.0-rc.1 --container-runtime=docker --driver=docker : (8.204885098s)
--- PASS: TestDownloadOnly/v1.23.0-rc.1/json-events (8.21s)

                                                
                                    
TestDownloadOnly/v1.23.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.23.0-rc.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.0-rc.1/kubectl
--- PASS: TestDownloadOnly/v1.23.0-rc.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.0-rc.1/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.0-rc.1/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20211214190531-1964
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20211214190531-1964: exit status 85 (275.989648ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/12/14 19:06:07
	Running on machine: 37309
	Binary: Built with gc go1.17.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1214 19:06:07.261307    2090 out.go:297] Setting OutFile to fd 1 ...
	I1214 19:06:07.261431    2090 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:06:07.261435    2090 out.go:310] Setting ErrFile to fd 2...
	I1214 19:06:07.261439    2090 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:06:07.261508    2090 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/bin
	W1214 19:06:07.261587    2090 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/config/config.json: no such file or directory
	I1214 19:06:07.261745    2090 out.go:304] Setting JSON to true
	I1214 19:06:07.285839    2090 start.go:112] hostinfo: {"hostname":"37309.local","uptime":343,"bootTime":1639537224,"procs":322,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1214 19:06:07.285980    2090 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1214 19:06:07.313371    2090 notify.go:174] Checking for updates...
	I1214 19:06:07.339852    2090 config.go:176] Loaded profile config "download-only-20211214190531-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	W1214 19:06:07.339910    2090 start.go:703] api.Load failed for download-only-20211214190531-1964: filestore "download-only-20211214190531-1964": Docker machine "download-only-20211214190531-1964" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1214 19:06:07.339972    2090 driver.go:344] Setting default libvirt URI to qemu:///system
	W1214 19:06:07.339998    2090 start.go:703] api.Load failed for download-only-20211214190531-1964: filestore "download-only-20211214190531-1964": Docker machine "download-only-20211214190531-1964" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1214 19:06:07.430839    2090 docker.go:132] docker version: linux-20.10.6
	I1214 19:06:07.430969    2090 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 19:06:07.604120    2090 info.go:263] docker info: {ID:5AO3:Q7BV:QPO2:IORE:2FWE:BSI4:OSEF:34WA:NLU4:XM3Q:JID7:HR3K Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2021-12-15 03:06:07.549680232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 19:06:07.631057    2090 start.go:280] selected driver: docker
	I1214 19:06:07.631080    2090 start.go:795] validating driver "docker" against &{Name:download-only-20211214190531-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:download-only-20211214190531-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 19:06:07.631552    2090 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 19:06:07.805071    2090 info.go:263] docker info: {ID:5AO3:Q7BV:QPO2:IORE:2FWE:BSI4:OSEF:34WA:NLU4:XM3Q:JID7:HR3K Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2021-12-15 03:06:07.75086572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 19:06:07.807007    2090 cni.go:93] Creating CNI manager for ""
	I1214 19:06:07.807025    2090 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I1214 19:06:07.807035    2090 start_flags.go:298] config:
	{Name:download-only-20211214190531-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.0-rc.1 ClusterName:download-only-20211214190531-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 19:06:07.834033    2090 cache.go:118] Beginning downloading kic base image for docker with docker
	I1214 19:06:07.859554    2090 preload.go:132] Checking if preload exists for k8s version v1.23.0-rc.1 and runtime docker
	I1214 19:06:07.859622    2090 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab in local docker daemon
	I1214 19:06:07.935825    2090 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v16-v1.23.0-rc.1-docker-overlay2-amd64.tar.lz4
	I1214 19:06:07.935849    2090 cache.go:57] Caching tarball of preloaded images
	I1214 19:06:07.936063    2090 preload.go:132] Checking if preload exists for k8s version v1.23.0-rc.1 and runtime docker
	I1214 19:06:07.961421    2090 preload.go:238] getting checksum for preloaded-images-k8s-v16-v1.23.0-rc.1-docker-overlay2-amd64.tar.lz4 ...
	I1214 19:06:07.970090    2090 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab in local docker daemon, skipping pull
	I1214 19:06:07.970104    2090 cache.go:140] gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab exists in daemon, skipping load
	I1214 19:06:08.059223    2090 download.go:100] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v16-v1.23.0-rc.1-docker-overlay2-amd64.tar.lz4?checksum=md5:94958a033570f22b2a2cae74ddb6276f -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.0-rc.1-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20211214190531-1964"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.0-rc.1/LogsDuration (0.28s)

TestDownloadOnly/DeleteAll (1.12s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 delete --all
aaa_download_only_test.go:189: (dbg) Done: out/minikube-darwin-amd64 delete --all: (1.120787273s)
--- PASS: TestDownloadOnly/DeleteAll (1.12s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.63s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20211214190531-1964
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.63s)

TestDownloadOnlyKic (10.96s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20211214190618-1964 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:226: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20211214190618-1964 --force --alsologtostderr --driver=docker : (9.430327402s)
helpers_test.go:176: Cleaning up "download-docker-20211214190618-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20211214190618-1964
--- PASS: TestDownloadOnlyKic (10.96s)

TestOffline (116.51s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20211214195817-1964 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20211214195817-1964 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (1m42.034904197s)
helpers_test.go:176: Cleaning up "offline-docker-20211214195817-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20211214195817-1964
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20211214195817-1964: (14.474378691s)
--- PASS: TestOffline (116.51s)

TestAddons/Setup (188.66s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20211214190629-1964 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20211214190629-1964 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m8.664391866s)
--- PASS: TestAddons/Setup (188.66s)

TestAddons/parallel/MetricsServer (5.78s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 2.198828ms
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-77c99ccb96-hzljz" [7bf31960-b7be-4ab7-878d-a5b464132eba] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006866553s
addons_test.go:366: (dbg) Run:  kubectl --context addons-20211214190629-1964 top pods -n kube-system
addons_test.go:383: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20211214190629-1964 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.78s)

TestAddons/parallel/HelmTiller (11.12s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 12.072771ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:343: "tiller-deploy-64b546c44b-wbptq" [40832f54-05dc-4dbd-92b9-bd8fbd1d14e6] Running
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.020393259s

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:424: (dbg) Run:  kubectl --context addons-20211214190629-1964 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:424: (dbg) Done: kubectl --context addons-20211214190629-1964 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.480843954s)
addons_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20211214190629-1964 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.12s)

TestAddons/parallel/Olm (40.74s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:453: (dbg) Run:  kubectl --context addons-20211214190629-1964 wait --for=condition=ready --namespace=olm pod --selector=app=catalog-operator --timeout=90s
addons_test.go:456: catalog-operator stabilized in 59.927285ms
addons_test.go:458: (dbg) Run:  kubectl --context addons-20211214190629-1964 wait --for=condition=ready --namespace=olm pod --selector=app=olm-operator --timeout=90s
addons_test.go:461: olm-operator stabilized in 118.195837ms
addons_test.go:463: (dbg) Run:  kubectl --context addons-20211214190629-1964 wait --for=condition=ready --namespace=olm pod --selector=app=packageserver --timeout=90s
addons_test.go:466: packageserver stabilized in 178.767026ms
addons_test.go:468: (dbg) Run:  kubectl --context addons-20211214190629-1964 wait --for=condition=ready --namespace=olm pod --selector=olm.catalogSource=operatorhubio-catalog --timeout=90s
addons_test.go:471: operatorhubio-catalog stabilized in 235.351647ms
addons_test.go:474: (dbg) Run:  kubectl --context addons-20211214190629-1964 create -f testdata/etcd.yaml
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211214190629-1964 get csv -n my-etcd
addons_test.go:486: kubectl --context addons-20211214190629-1964 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211214190629-1964 get csv -n my-etcd
addons_test.go:486: kubectl --context addons-20211214190629-1964 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211214190629-1964 get csv -n my-etcd
addons_test.go:486: kubectl --context addons-20211214190629-1964 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211214190629-1964 get csv -n my-etcd
addons_test.go:486: kubectl --context addons-20211214190629-1964 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211214190629-1964 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:481: (dbg) Run:  kubectl --context addons-20211214190629-1964 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (40.74s)

TestAddons/parallel/CSI (44.34s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 4.915558ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20211214190629-1964 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20211214190629-1964 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run:  kubectl --context addons-20211214190629-1964 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [aab51390-5f2e-47d4-a2c7-5188dfc10a6e] Pending
helpers_test.go:343: "task-pv-pod" [aab51390-5f2e-47d4-a2c7-5188dfc10a6e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [aab51390-5f2e-47d4-a2c7-5188dfc10a6e] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.007854876s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20211214190629-1964 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20211214190629-1964 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20211214190629-1964 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20211214190629-1964 delete pod task-pv-pod

=== CONT  TestAddons/parallel/CSI
addons_test.go:551: (dbg) Run:  kubectl --context addons-20211214190629-1964 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20211214190629-1964 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20211214190629-1964 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20211214190629-1964 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [30b8ab12-fec4-48d2-8f27-85ae7cfcafdd] Pending
helpers_test.go:343: "task-pv-pod-restore" [30b8ab12-fec4-48d2-8f27-85ae7cfcafdd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [30b8ab12-fec4-48d2-8f27-85ae7cfcafdd] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 17.009793527s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20211214190629-1964 delete pod task-pv-pod-restore
addons_test.go:581: (dbg) Run:  kubectl --context addons-20211214190629-1964 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20211214190629-1964 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20211214190629-1964 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:589: (dbg) Done: out/minikube-darwin-amd64 -p addons-20211214190629-1964 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.098432126s)
addons_test.go:593: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20211214190629-1964 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.34s)

TestAddons/serial/GCPAuth (18.42s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20211214190629-1964 create -f testdata/busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [b995dedc-f1b1-470c-8689-8067ef899eff] Pending
helpers_test.go:343: "busybox" [b995dedc-f1b1-470c-8689-8067ef899eff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [b995dedc-f1b1-470c-8689-8067ef899eff] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 10.006961458s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20211214190629-1964 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:629: (dbg) Run:  kubectl --context addons-20211214190629-1964 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20211214190629-1964 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20211214190629-1964 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-darwin-amd64 -p addons-20211214190629-1964 addons disable gcp-auth --alsologtostderr -v=1: (7.34251986s)
--- PASS: TestAddons/serial/GCPAuth (18.42s)

TestAddons/StoppedEnableDisable (18.35s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20211214190629-1964
addons_test.go:133: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20211214190629-1964: (17.905384032s)
addons_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20211214190629-1964
addons_test.go:141: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20211214190629-1964
--- PASS: TestAddons/StoppedEnableDisable (18.35s)

TestCertOptions (102.51s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20211214201038-1964 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E1214 20:10:44.910534    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 20:11:29.062237    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 20:11:45.983237    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 20:11:51.195784    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
cert_options_test.go:50: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20211214201038-1964 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (1m25.007589024s)
cert_options_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20211214201038-1964 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20211214201038-1964 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-20211214201038-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20211214201038-1964
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20211214201038-1964: (16.122749578s)
--- PASS: TestCertOptions (102.51s)

TestCertExpiration (292.51s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20211214200832-1964 --memory=2048 --cert-expiration=3m --driver=docker 

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20211214200832-1964 --memory=2048 --cert-expiration=3m --driver=docker : (48.031541245s)
E1214 20:09:37.761781    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory

=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20211214200832-1964 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20211214200832-1964 --memory=2048 --cert-expiration=8760h --driver=docker : (54.21720076s)
helpers_test.go:176: Cleaning up "cert-expiration-20211214200832-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20211214200832-1964
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20211214200832-1964: (10.256548071s)
--- PASS: TestCertExpiration (292.51s)

TestDockerFlags (82.62s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20211214200915-1964 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20211214200915-1964 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (1m4.851505146s)
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20211214200915-1964 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:62: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20211214200915-1964 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-20211214200915-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20211214200915-1964
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20211214200915-1964: (16.456004209s)
--- PASS: TestDockerFlags (82.62s)

TestForceSystemdFlag (70.32s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20211214200629-1964 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
E1214 20:06:45.975185    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 20:06:51.177675    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20211214200629-1964 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (1m3.186917687s)
docker_test.go:105: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20211214200629-1964 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-20211214200629-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20211214200629-1964
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20211214200629-1964: (6.441658108s)
--- PASS: TestForceSystemdFlag (70.32s)

TestForceSystemdEnv (68.21s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20211214200807-1964 --memory=2048 --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20211214200807-1964 --memory=2048 --alsologtostderr -v=5 --driver=docker : (56.769660327s)
docker_test.go:105: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20211214200807-1964 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-20211214200807-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20211214200807-1964
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20211214200807-1964: (10.647053903s)
--- PASS: TestForceSystemdEnv (68.21s)

TestHyperKitDriverInstallOrUpdate (7.56s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.56s)

TestErrorSpam/setup (72.64s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20211214191129-1964 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 --driver=docker 
error_spam_test.go:79: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20211214191129-1964 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 --driver=docker : (1m12.639506948s)
error_spam_test.go:89: acceptable stderr: "! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilites with Kubernetes 1.22.4."
--- PASS: TestErrorSpam/setup (72.64s)

TestErrorSpam/start (2.29s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 start --dry-run
--- PASS: TestErrorSpam/start (2.29s)

TestErrorSpam/status (1.92s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 status
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 status
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 status
--- PASS: TestErrorSpam/status (1.92s)

TestErrorSpam/pause (2.14s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 pause
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 pause
--- PASS: TestErrorSpam/pause (2.14s)

TestErrorSpam/unpause (2.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 unpause
--- PASS: TestErrorSpam/unpause (2.12s)

TestErrorSpam/stop (18.12s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 stop
error_spam_test.go:157: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 stop: (17.373247269s)
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20211214191129-1964 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20211214191129-1964 stop
--- PASS: TestErrorSpam/stop (18.12s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1685: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/files/etc/test/nested/copy/1964/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (123.95s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2067: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211214191315-1964 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
E1214 19:14:37.715496    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:14:37.721203    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:14:37.731633    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:14:37.751827    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:14:37.791979    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:14:37.872660    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:14:38.034972    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:14:38.356237    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:14:38.997094    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:14:40.277319    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:14:42.840431    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:14:47.968090    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:14:58.213302    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:15:18.698523    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
functional_test.go:2067: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20211214191315-1964 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (2m3.947225533s)
--- PASS: TestFunctional/serial/StartWithProxy (123.95s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.51s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:637: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211214191315-1964 --alsologtostderr -v=8
functional_test.go:637: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20211214191315-1964 --alsologtostderr -v=8: (7.510407137s)
functional_test.go:641: soft start took 7.510999744s for "functional-20211214191315-1964" cluster.
--- PASS: TestFunctional/serial/SoftStart (7.51s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:659: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (1.63s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:674: (dbg) Run:  kubectl --context functional-20211214191315-1964 get po -A
functional_test.go:674: (dbg) Done: kubectl --context functional-20211214191315-1964 get po -A: (1.632376288s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.63s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 cache add k8s.gcr.io/pause:3.1
functional_test.go:1020: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 cache add k8s.gcr.io/pause:3.1: (2.258619573s)
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 cache add k8s.gcr.io/pause:3.3
functional_test.go:1020: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 cache add k8s.gcr.io/pause:3.3: (4.007483338s)
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 cache add k8s.gcr.io/pause:latest
functional_test.go:1020: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 cache add k8s.gcr.io/pause:latest: (3.220304806s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.49s)

TestFunctional/serial/CacheCmd/cache/add_local (2.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1051: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20211214191315-1964 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/functional-20211214191315-19644144101052
functional_test.go:1063: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 cache add minikube-local-cache-test:functional-20211214191315-1964
functional_test.go:1063: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 cache add minikube-local-cache-test:functional-20211214191315-1964: (1.494906417s)
functional_test.go:1068: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 cache delete minikube-local-cache-test:functional-20211214191315-1964
functional_test.go:1057: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20211214191315-1964
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.10s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1084: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.72s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1121: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1127: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1127: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (616.75233ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1132: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 cache reload
functional_test.go:1132: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 cache reload: (2.077879668s)
functional_test.go:1137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.99s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.47s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:694: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 kubectl -- --context functional-20211214191315-1964 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.47s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:719: (dbg) Run:  out/kubectl --context functional-20211214191315-1964 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.56s)

TestFunctional/serial/LogsCmd (3.23s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1210: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 logs
functional_test.go:1210: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 logs: (3.229200544s)
--- PASS: TestFunctional/serial/LogsCmd (3.23s)

TestFunctional/serial/LogsFileCmd (3.11s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1227: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/functional-20211214191315-1964766636843/logs.txt
functional_test.go:1227: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/functional-20211214191315-1964766636843/logs.txt: (3.112706279s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.11s)

TestFunctional/parallel/ConfigCmd (0.39s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1173: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1173: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 config get cpus
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211214191315-1964 config get cpus: exit status 14 (44.022198ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 config set cpus 2
functional_test.go:1173: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 config get cpus
functional_test.go:1173: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 config unset cpus
functional_test.go:1173: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 config get cpus
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211214191315-1964 config get cpus: exit status 14 (42.615048ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)

TestFunctional/parallel/DashboardCmd (3.06s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:884: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20211214191315-1964 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:889: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20211214191315-1964 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 4916: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (3.06s)

TestFunctional/parallel/DryRun (1.49s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:949: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211214191315-1964 --dry-run --memory 250MB --alsologtostderr --driver=docker 
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:949: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211214191315-1964 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (695.599596ms)
-- stdout --
	* [functional-20211214191315-1964] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13173
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1214 19:17:46.414009    4837 out.go:297] Setting OutFile to fd 1 ...
	I1214 19:17:46.414182    4837 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:17:46.414186    4837 out.go:310] Setting ErrFile to fd 2...
	I1214 19:17:46.414190    4837 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:17:46.414285    4837 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/bin
	I1214 19:17:46.414563    4837 out.go:304] Setting JSON to false
	I1214 19:17:46.439434    4837 start.go:112] hostinfo: {"hostname":"37309.local","uptime":1042,"bootTime":1639537224,"procs":317,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1214 19:17:46.439538    4837 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1214 19:17:46.504485    4837 out.go:176] * [functional-20211214191315-1964] minikube v1.24.0 on Darwin 11.2.3
	I1214 19:17:46.530362    4837 out.go:176]   - MINIKUBE_LOCATION=13173
	I1214 19:17:46.556331    4837 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	I1214 19:17:46.582334    4837 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1214 19:17:46.608326    4837 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube
	I1214 19:17:46.608737    4837 config.go:176] Loaded profile config "functional-20211214191315-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 19:17:46.609064    4837 driver.go:344] Setting default libvirt URI to qemu:///system
	I1214 19:17:46.710576    4837 docker.go:132] docker version: linux-20.10.6
	I1214 19:17:46.710710    4837 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 19:17:46.906747    4837 info.go:263] docker info: {ID:5AO3:Q7BV:QPO2:IORE:2FWE:BSI4:OSEF:34WA:NLU4:XM3Q:JID7:HR3K Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:51 SystemTime:2021-12-15 03:17:46.828797514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 19:17:46.954122    4837 out.go:176] * Using the docker driver based on existing profile
	I1214 19:17:46.954152    4837 start.go:280] selected driver: docker
	I1214 19:17:46.954159    4837 start.go:795] validating driver "docker" against &{Name:functional-20211214191315-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:functional-20211214191315-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 19:17:46.954277    4837 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1214 19:17:46.981997    4837 out.go:176] 
	W1214 19:17:46.982170    4837 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1214 19:17:47.008165    4837 out.go:176] 

** /stderr **
functional_test.go:966: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211214191315-1964 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.49s)
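The DryRun failure above is deliberate: the test requests 250MB so that minikube's preflight check rejects the start with `RSRC_INSUFFICIENT_REQ_MEMORY`. A minimal sketch of that kind of memory floor check, using the 1800MB threshold quoted in the log (the function name is hypothetical, not minikube's actual code):

```python
MINIMUM_USABLE_MB = 1800  # usable minimum quoted in the log output above

def validate_requested_memory(requested_mib: int) -> None:
    # Reject allocations below the usable minimum, as the dry-run above does.
    if requested_mib < MINIMUM_USABLE_MB:
        raise ValueError(
            f"Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested memory "
            f"allocation {requested_mib}MiB is less than the usable minimum "
            f"of {MINIMUM_USABLE_MB}MB"
        )
```

With 250MiB this raises, matching the non-zero exit seen in the log; with the profile's configured 4000MiB it passes silently.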

TestFunctional/parallel/InternationalLanguage (0.62s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20211214191315-1964 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:991: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20211214191315-1964 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (622.528483ms)

-- stdout --
	* [functional-20211214191315-1964] minikube v1.24.0 sur Darwin 11.2.3
	  - MINIKUBE_LOCATION=13173
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1214 19:17:43.211622    4683 out.go:297] Setting OutFile to fd 1 ...
	I1214 19:17:43.211750    4683 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:17:43.211755    4683 out.go:310] Setting ErrFile to fd 2...
	I1214 19:17:43.211758    4683 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:17:43.211866    4683 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/bin
	I1214 19:17:43.212130    4683 out.go:304] Setting JSON to false
	I1214 19:17:43.237856    4683 start.go:112] hostinfo: {"hostname":"37309.local","uptime":1039,"bootTime":1639537224,"procs":318,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1214 19:17:43.237954    4683 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I1214 19:17:43.265036    4683 out.go:176] * [functional-20211214191315-1964] minikube v1.24.0 sur Darwin 11.2.3
	I1214 19:17:43.311566    4683 out.go:176]   - MINIKUBE_LOCATION=13173
	I1214 19:17:43.337492    4683 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	I1214 19:17:43.363318    4683 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1214 19:17:43.389416    4683 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube
	I1214 19:17:43.389880    4683 config.go:176] Loaded profile config "functional-20211214191315-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 19:17:43.390205    4683 driver.go:344] Setting default libvirt URI to qemu:///system
	I1214 19:17:43.492984    4683 docker.go:132] docker version: linux-20.10.6
	I1214 19:17:43.493119    4683 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I1214 19:17:43.685869    4683 info.go:263] docker info: {ID:5AO3:Q7BV:QPO2:IORE:2FWE:BSI4:OSEF:34WA:NLU4:XM3Q:JID7:HR3K Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:51 SystemTime:2021-12-15 03:17:43.627980455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I1214 19:17:43.712517    4683 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I1214 19:17:43.712553    4683 start.go:280] selected driver: docker
	I1214 19:17:43.712560    4683 start.go:795] validating driver "docker" against &{Name:functional-20211214191315-1964 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.28-1638824847-13104@sha256:a90edc66cae8cca35685dce007b915405a2ba91d903f99f7d8f79cd9d1faabab Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.4 ClusterName:functional-20211214191315-1964 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.22.4 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker}
	I1214 19:17:43.712655    4683 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I1214 19:17:43.740115    4683 out.go:176] 
	W1214 19:17:43.740253    4683 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1214 19:17:43.787213    4683 out.go:176] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.62s)
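InternationalLanguage passes because the same dry-run, executed under a French locale, produces translated output ("Utilisation du pilote docker basé sur le profil existant"). A toy sketch of table-driven message translation of the kind visible above; the catalog holds only strings that appear in this log and is not minikube's actual translation file:

```python
# Illustrative message catalog: look up a translation, fall back to English.
CATALOG = {
    "fr": {
        "Using the docker driver based on existing profile":
            "Utilisation du pilote docker basé sur le profil existant",
    },
}

def translate(message: str, lang: str) -> str:
    # Unknown languages or untranslated messages pass through unchanged.
    return CATALOG.get(lang, {}).get(message, message)
```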

TestFunctional/parallel/StatusCmd (2.3s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:833: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.30s)

TestFunctional/parallel/AddonsCmd (0.28s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1519: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 addons list
functional_test.go:1531: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

TestFunctional/parallel/PersistentVolumeClaim (29.79s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [fb70bc64-478f-4146-9565-9aa0691bc521] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007071964s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20211214191315-1964 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20211214191315-1964 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20211214191315-1964 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20211214191315-1964 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [1b4e6358-15f0-4f5e-bca7-96b3b8ffbbf5] Pending
helpers_test.go:343: "sp-pod" [1b4e6358-15f0-4f5e-bca7-96b3b8ffbbf5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1214 19:17:21.603876    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
helpers_test.go:343: "sp-pod" [1b4e6358-15f0-4f5e-bca7-96b3b8ffbbf5] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.008108264s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20211214191315-1964 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20211214191315-1964 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20211214191315-1964 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [cdfae30f-d3ff-4687-b630-206eb83b0406] Pending
helpers_test.go:343: "sp-pod" [cdfae30f-d3ff-4687-b630-206eb83b0406] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [cdfae30f-d3ff-4687-b630-206eb83b0406] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.010822014s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20211214191315-1964 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.79s)
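The PVC test above repeatedly observes the `sp-pod` moving from Pending through ContainersNotReady to Running before declaring the pods healthy. The polling pattern behind those "waiting 3m0s for pods matching ..." lines can be sketched as follows; `get_phases` is a hypothetical callable standing in for a Kubernetes API query, not the helper's real interface:

```python
import time

def wait_for_pods(get_phases, timeout_s=180.0, poll_s=0.01):
    # Poll until every matching pod reports phase "Running" or the
    # timeout (3m0s in the log above) expires.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phases = get_phases()
        if phases and all(p == "Running" for p in phases):
            return True
        time.sleep(poll_s)
    return False
```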

TestFunctional/parallel/SSHCmd (1.27s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1571: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.27s)

TestFunctional/parallel/CpCmd (2.52s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh -n functional-20211214191315-1964 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 cp functional-20211214191315-1964:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mk_test1041915919/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh -n functional-20211214191315-1964 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.52s)

TestFunctional/parallel/MySQL (28.00s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1623: (dbg) Run:  kubectl --context functional-20211214191315-1964 replace --force -f testdata/mysql.yaml
functional_test.go:1629: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:343: "mysql-9bbbc5bbb-8t2d7" [4b1c580f-38eb-4461-a441-ea852abd2f8b] Pending

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-8t2d7" [4b1c580f-38eb-4461-a441-ea852abd2f8b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-8t2d7" [4b1c580f-38eb-4461-a441-ea852abd2f8b] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1629: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.023793289s
functional_test.go:1637: (dbg) Run:  kubectl --context functional-20211214191315-1964 exec mysql-9bbbc5bbb-8t2d7 -- mysql -ppassword -e "show databases;"
functional_test.go:1637: (dbg) Non-zero exit: kubectl --context functional-20211214191315-1964 exec mysql-9bbbc5bbb-8t2d7 -- mysql -ppassword -e "show databases;": exit status 1 (173.799582ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1637: (dbg) Run:  kubectl --context functional-20211214191315-1964 exec mysql-9bbbc5bbb-8t2d7 -- mysql -ppassword -e "show databases;"
functional_test.go:1637: (dbg) Non-zero exit: kubectl --context functional-20211214191315-1964 exec mysql-9bbbc5bbb-8t2d7 -- mysql -ppassword -e "show databases;": exit status 1 (131.867462ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1637: (dbg) Run:  kubectl --context functional-20211214191315-1964 exec mysql-9bbbc5bbb-8t2d7 -- mysql -ppassword -e "show databases;"
functional_test.go:1637: (dbg) Non-zero exit: kubectl --context functional-20211214191315-1964 exec mysql-9bbbc5bbb-8t2d7 -- mysql -ppassword -e "show databases;": exit status 1 (140.065492ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1637: (dbg) Run:  kubectl --context functional-20211214191315-1964 exec mysql-9bbbc5bbb-8t2d7 -- mysql -ppassword -e "show databases;"
functional_test.go:1637: (dbg) Non-zero exit: kubectl --context functional-20211214191315-1964 exec mysql-9bbbc5bbb-8t2d7 -- mysql -ppassword -e "show databases;": exit status 1 (153.315203ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1637: (dbg) Run:  kubectl --context functional-20211214191315-1964 exec mysql-9bbbc5bbb-8t2d7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.00s)
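The MySQL test tolerates the "Access denied" and "Can't connect" errors above by simply re-running `mysql -e "show databases;"` until the server finishes initializing. That retry-until-success pattern can be sketched like this (a generic helper, not the test's actual code; `run_query` is a hypothetical callable):

```python
import time

def retry(run_query, attempts=5, delay_s=0.0):
    # Re-run a flaky command until it succeeds or attempts are exhausted,
    # mirroring the repeated `show databases;` invocations in the log above.
    last_err = None
    for _ in range(attempts):
        try:
            return run_query()
        except RuntimeError as err:
            last_err = err
            time.sleep(delay_s)
    raise last_err
```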

TestFunctional/parallel/FileSync (0.71s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1759: Checking for existence of /etc/test/nested/copy/1964/hosts within VM
functional_test.go:1761: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "sudo cat /etc/test/nested/copy/1964/hosts"
functional_test.go:1766: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.71s)

TestFunctional/parallel/CertSync (3.99s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1802: Checking for existence of /etc/ssl/certs/1964.pem within VM
functional_test.go:1803: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "sudo cat /etc/ssl/certs/1964.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1802: Checking for existence of /usr/share/ca-certificates/1964.pem within VM
functional_test.go:1803: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "sudo cat /usr/share/ca-certificates/1964.pem"
functional_test.go:1802: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1803: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1829: Checking for existence of /etc/ssl/certs/19642.pem within VM
functional_test.go:1830: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "sudo cat /etc/ssl/certs/19642.pem"
functional_test.go:1829: Checking for existence of /usr/share/ca-certificates/19642.pem within VM
functional_test.go:1830: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "sudo cat /usr/share/ca-certificates/19642.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1829: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1830: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.99s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20211214191315-1964 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
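The go-template above iterates over the first node's label map and prints each key. An equivalent in Python (illustrative only; Go's `text/template` ranges over map keys in sorted order, which the `sorted` call reproduces):

```python
def label_keys(node: dict) -> str:
    # Collect the label keys of a node object, sorted, space-separated,
    # like `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`.
    return " ".join(sorted(node["metadata"]["labels"]))
```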

TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1857: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1857: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "sudo systemctl is-active crio": exit status 1 (659.617116ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.23s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2103: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 version -o=json --components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2103: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 version -o=json --components: (1.233707661s)
--- PASS: TestFunctional/parallel/Version/components (1.23s)

TestFunctional/parallel/ImageCommands/ImageList (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageList
=== PAUSE TestFunctional/parallel/ImageCommands/ImageList
=== CONT  TestFunctional/parallel/ImageCommands/ImageList
functional_test.go:263: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 image ls
functional_test.go:268: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20211214191315-1964 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.5
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.22.4
k8s.gcr.io/kube-proxy:v1.22.4
k8s.gcr.io/kube-controller-manager:v1.22.4
k8s.gcr.io/kube-apiserver:v1.22.4
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.4
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-20211214191315-1964
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20211214191315-1964
docker.io/library/busybox:1.28.4-glibc
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
--- PASS: TestFunctional/parallel/ImageCommands/ImageList (0.47s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:286: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:286: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh pgrep buildkitd: exit status 1 (869.403347ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 image build -t localhost/my-image:functional-20211214191315-1964 testdata/build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:293: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 image build -t localhost/my-image:functional-20211214191315-1964 testdata/build: (2.963042797s)
functional_test.go:298: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20211214191315-1964 image build -t localhost/my-image:functional-20211214191315-1964 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM busybox
latest: Pulling from library/busybox
3cb635b06aa2: Pulling fs layer
3cb635b06aa2: Verifying Checksum
3cb635b06aa2: Download complete
3cb635b06aa2: Pull complete
Digest: sha256:b5cfd4befc119a590ca1a81d6bb0fa1fb19f1fbebd0397f25fae164abe1e8a6a
Status: Downloaded newer image for busybox:latest
---> ffe9d497c324
Step 2/3 : RUN true
---> Running in ab372921bc6f
Removing intermediate container ab372921bc6f
---> eec416cecb3b
Step 3/3 : ADD content.txt /
---> 1169102ee946
Successfully built 1169102ee946
Successfully tagged localhost/my-image:functional-20211214191315-1964
functional_test.go:426: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.26s)

TestFunctional/parallel/ImageCommands/Setup (4.2s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:320: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:320: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.080874841s)
functional_test.go:325: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20211214191315-1964
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.20s)

TestFunctional/parallel/DockerEnv/bash (2.56s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:477: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20211214191315-1964 docker-env) && out/minikube-darwin-amd64 status -p functional-20211214191315-1964"
functional_test.go:477: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20211214191315-1964 docker-env) && out/minikube-darwin-amd64 status -p functional-20211214191315-1964": (1.499859709s)
functional_test.go:500: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20211214191315-1964 docker-env) && docker images"
functional_test.go:500: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20211214191315-1964 docker-env) && docker images": (1.057620374s)
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.56s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.37s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1949: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.37s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.98s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1949: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.98s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.4s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1949: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.40s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:333: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211214191315-1964
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:333: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211214191315-1964: (3.169301591s)
functional_test.go:426: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.62s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:343: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211214191315-1964
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:343: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211214191315-1964: (2.316538831s)
functional_test.go:426: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.74s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.823874478s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20211214191315-1964
functional_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211214191315-1964
functional_test.go:240: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 image load --daemon gcr.io/google-containers/addon-resizer:functional-20211214191315-1964: (4.315700058s)
functional_test.go:426: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.74s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 image save gcr.io/google-containers/addon-resizer:functional-20211214191315-1964 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:358: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 image save gcr.io/google-containers/addon-resizer:functional-20211214191315-1964 /Users/jenkins/workspace/addon-resizer-save.tar: (1.911346499s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.91s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 image rm gcr.io/google-containers/addon-resizer:functional-20211214191315-1964
functional_test.go:426: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.99s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:387: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.843834775s)
functional_test.go:426: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.28s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:397: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20211214191315-1964
functional_test.go:402: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 image save --daemon gcr.io/google-containers/addon-resizer:functional-20211214191315-1964
functional_test.go:402: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 image save --daemon gcr.io/google-containers/addon-resizer:functional-20211214191315-1964: (2.543558117s)
functional_test.go:407: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20211214191315-1964
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.79s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.84s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1255: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.84s)

TestFunctional/parallel/ProfileCmd/profile_list (0.76s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1290: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1295: Took "675.600394ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1304: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1309: Took "82.930129ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.76s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.87s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1341: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1346: Took "752.514606ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1354: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1359: Took "112.740246ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.87s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20211214191315-1964 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20211214191315-1964 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [c1298c8e-1dba-41bd-a0dc-d37750576aee] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [c1298c8e-1dba-41bd-a0dc-d37750576aee] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.007367544s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.20s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20211214191315-1964 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (12.59s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
=== CONT  TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (12.59s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20211214191315-1964 tunnel --alsologtostderr] ...
helpers_test.go:501: unable to terminate pid 4647: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/MountCmd/any-port (9.93s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20211214191315-1964 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest1718431229:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1639538263790759000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest1718431229/created-by-test
functional_test_mount_test.go:110: wrote "test-1639538263790759000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest1718431229/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1639538263790759000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest1718431229/test-1639538263790759000
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (681.158402ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 15 03:17 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 15 03:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 15 03:17 test-1639538263790759000
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh cat /mount-9p/test-1639538263790759000
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20211214191315-1964 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [c8644a9f-9318-42c9-b824-5015b32fe360] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [c8644a9f-9318-42c9-b824-5015b32fe360] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
2021/12/14 19:17:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [c8644a9f-9318-42c9-b824-5015b32fe360] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.007868574s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20211214191315-1964 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh stat /mount-9p/created-by-pod
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20211214191315-1964 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest1718431229:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.93s)

TestFunctional/parallel/MountCmd/specific-port (3.54s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20211214191315-1964 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest2604104409:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (724.995509ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20211214191315-1964 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest2604104409:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh "sudo umount -f /mount-9p": exit status 1 (599.040386ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-darwin-amd64 -p functional-20211214191315-1964 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20211214191315-1964 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mounttest2604104409:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (3.54s)

TestFunctional/delete_addon-resizer_images (0.39s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20211214191315-1964
--- PASS: TestFunctional/delete_addon-resizer_images (0.39s)

TestFunctional/delete_my-image_image (0.12s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20211214191315-1964
--- PASS: TestFunctional/delete_my-image_image (0.12s)

TestFunctional/delete_minikube_cached_images (0.12s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20211214191315-1964
--- PASS: TestFunctional/delete_minikube_cached_images (0.12s)

TestIngressAddonLegacy/StartLegacyK8sCluster (134.82s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20211214191813-1964 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E1214 19:19:37.726306    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:20:05.452834    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20211214191813-1964 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : (2m14.824751233s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (134.82s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.35s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20211214191813-1964 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-20211214191813-1964 addons enable ingress --alsologtostderr -v=5: (16.354695466s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.35s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.62s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20211214191813-1964 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.62s)

TestJSONOutput/start/Command (125.54s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20211214192132-1964 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E1214 19:21:45.952731    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:21:45.958504    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:21:45.969894    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:21:45.994606    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:21:46.035895    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:21:46.120585    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:21:46.284555    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:21:46.605468    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:21:47.247016    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:21:48.528064    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:21:51.096201    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:21:56.221150    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:22:06.461740    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:22:26.941694    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:23:07.902185    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20211214192132-1964 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (2m5.544370843s)
--- PASS: TestJSONOutput/start/Command (125.54s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.95s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20211214192132-1964 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.95s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.77s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20211214192132-1964 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.77s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (18.05s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20211214192132-1964 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20211214192132-1964 --output=json --user=testUser: (18.052492048s)
--- PASS: TestJSONOutput/stop/Command (18.05s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20211214192403-1964 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20211214192403-1964 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (119.096173ms)

-- stdout --
	{"specversion":"1.0","id":"3e4a80c5-2a64-4ded-b4ef-f8e728797904","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20211214192403-1964] minikube v1.24.0 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d9573b7-7cdf-4493-9333-0f99aa0dd54a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13173"}}
	{"specversion":"1.0","id":"87f52d55-789a-4bf4-8b1c-f986bf61ed58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig"}}
	{"specversion":"1.0","id":"ebb299cf-46eb-42a4-afbc-621b0338b8b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"25285008-17db-494b-8e0d-fa1972030d8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube"}}
	{"specversion":"1.0","id":"64fe1e04-e4cc-4c06-8bdd-375e70aa83fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20211214192403-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20211214192403-1964
--- PASS: TestErrorJSONOutput (0.76s)

TestKicCustomNetwork/create_custom_network (89.58s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20211214192404-1964 --network=
E1214 19:24:29.824589    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:24:37.722667    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20211214192404-1964 --network=: (1m16.589812268s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20211214192404-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20211214192404-1964
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20211214192404-1964: (12.811184524s)
--- PASS: TestKicCustomNetwork/create_custom_network (89.58s)

TestKicCustomNetwork/use_default_bridge_network (73.59s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20211214192534-1964 --network=bridge
E1214 19:25:44.881310    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:25:44.886981    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:25:44.897327    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:25:44.919560    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:25:44.964717    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:25:45.044899    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:25:45.211573    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:25:45.533672    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:25:46.176858    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:25:47.463925    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:25:50.024261    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:25:55.144776    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:26:05.387062    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:26:25.867119    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20211214192534-1964 --network=bridge: (1m4.122722271s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20211214192534-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20211214192534-1964
E1214 19:26:45.949940    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20211214192534-1964: (9.350144791s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (73.59s)

TestKicExistingNetwork (87.42s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20211214192653-1964 --network=existing-network
E1214 19:27:06.827486    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:27:13.666898    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
kic_custom_network_test.go:94: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20211214192653-1964 --network=existing-network: (1m8.725298401s)
helpers_test.go:176: Cleaning up "existing-network-20211214192653-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20211214192653-1964
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20211214192653-1964: (13.088165469s)
--- PASS: TestKicExistingNetwork (87.42s)

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMountStart/serial/StartWithMountFirst (74.79s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20211214192815-1964 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --driver=docker 
E1214 19:28:28.747686    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
mount_start_test.go:89: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20211214192815-1964 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --driver=docker : (1m14.787327861s)
--- PASS: TestMountStart/serial/StartWithMountFirst (74.79s)

TestMountStart/serial/StartWithMountSecond (73.83s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20211214192815-1964 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --driver=docker 
E1214 19:29:37.720787    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
mount_start_test.go:89: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20211214192815-1964 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --driver=docker : (1m13.826178294s)
--- PASS: TestMountStart/serial/StartWithMountSecond (73.83s)

TestMountStart/serial/VerifyMountFirst (0.63s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:103: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20211214192815-1964 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.63s)

TestMountStart/serial/VerifyMountSecond (0.63s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:103: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20211214192815-1964 ssh -- ls /minikube-host
E1214 19:30:44.873727    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountSecond (0.63s)

TestMountStart/serial/DeleteFirst (11.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20211214192815-1964 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20211214192815-1964 --alsologtostderr -v=5: (11.620053085s)
--- PASS: TestMountStart/serial/DeleteFirst (11.62s)

TestMountStart/serial/VerifyMountPostDelete (0.63s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:103: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20211214192815-1964 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.63s)

TestMountStart/serial/Stop (17.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:144: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20211214192815-1964
E1214 19:31:00.815544    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:31:12.592477    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
mount_start_test.go:144: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20211214192815-1964: (17.186443388s)
--- PASS: TestMountStart/serial/Stop (17.19s)

TestMountStart/serial/RestartStopped (48.31s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20211214192815-1964
E1214 19:31:45.951106    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20211214192815-1964: (48.313677211s)
--- PASS: TestMountStart/serial/RestartStopped (48.31s)

TestMountStart/serial/VerifyMountPostStop (0.62s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:103: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20211214192815-1964 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.62s)

TestMultiNode/serial/FreshStart2Nodes (228.43s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211214193217-1964 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E1214 19:34:37.717550    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:35:44.871478    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20211214193217-1964 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (3m47.338648354s)
multinode_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status --alsologtostderr: (1.089676636s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (228.43s)

TestMultiNode/serial/DeployApp2Nodes (6.49s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.834566187s)
multinode_test.go:491: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- rollout status deployment/busybox
multinode_test.go:491: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- rollout status deployment/busybox: (3.164505655s)
multinode_test.go:497: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:517: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- exec busybox-84b6686758-p2sz9 -- nslookup kubernetes.io
multinode_test.go:517: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- exec busybox-84b6686758-rdwsm -- nslookup kubernetes.io
multinode_test.go:527: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- exec busybox-84b6686758-p2sz9 -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- exec busybox-84b6686758-rdwsm -- nslookup kubernetes.default
multinode_test.go:535: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- exec busybox-84b6686758-p2sz9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:535: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- exec busybox-84b6686758-rdwsm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.49s)

TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:545: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:553: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- exec busybox-84b6686758-p2sz9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- exec busybox-84b6686758-p2sz9 -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:553: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- exec busybox-84b6686758-rdwsm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20211214193217-1964 -- exec busybox-84b6686758-rdwsm -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)

TestMultiNode/serial/AddNode (113.9s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20211214193217-1964 -v 3 --alsologtostderr
E1214 19:36:45.945622    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20211214193217-1964 -v 3 --alsologtostderr: (1m52.336117736s)
multinode_test.go:117: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status --alsologtostderr: (1.566001662s)
--- PASS: TestMultiNode/serial/AddNode (113.90s)

TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (22.64s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status --output json --alsologtostderr
E1214 19:38:09.022349    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
multinode_test.go:174: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status --output json --alsologtostderr: (1.55451095s)
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 cp testdata/cp-test.txt multinode-20211214193217-1964:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 cp multinode-20211214193217-1964:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mk_cp_test3225374608/cp-test_multinode-20211214193217-1964.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 cp multinode-20211214193217-1964:/home/docker/cp-test.txt multinode-20211214193217-1964-m02:/home/docker/cp-test_multinode-20211214193217-1964_multinode-20211214193217-1964-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964-m02 "sudo cat /home/docker/cp-test_multinode-20211214193217-1964_multinode-20211214193217-1964-m02.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 cp multinode-20211214193217-1964:/home/docker/cp-test.txt multinode-20211214193217-1964-m03:/home/docker/cp-test_multinode-20211214193217-1964_multinode-20211214193217-1964-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964-m03 "sudo cat /home/docker/cp-test_multinode-20211214193217-1964_multinode-20211214193217-1964-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 cp testdata/cp-test.txt multinode-20211214193217-1964-m02:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 cp multinode-20211214193217-1964-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mk_cp_test3225374608/cp-test_multinode-20211214193217-1964-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 cp multinode-20211214193217-1964-m02:/home/docker/cp-test.txt multinode-20211214193217-1964:/home/docker/cp-test_multinode-20211214193217-1964-m02_multinode-20211214193217-1964.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964 "sudo cat /home/docker/cp-test_multinode-20211214193217-1964-m02_multinode-20211214193217-1964.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 cp multinode-20211214193217-1964-m02:/home/docker/cp-test.txt multinode-20211214193217-1964-m03:/home/docker/cp-test_multinode-20211214193217-1964-m02_multinode-20211214193217-1964-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964-m03 "sudo cat /home/docker/cp-test_multinode-20211214193217-1964-m02_multinode-20211214193217-1964-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 cp testdata/cp-test.txt multinode-20211214193217-1964-m03:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 cp multinode-20211214193217-1964-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/mk_cp_test3225374608/cp-test_multinode-20211214193217-1964-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 cp multinode-20211214193217-1964-m03:/home/docker/cp-test.txt multinode-20211214193217-1964:/home/docker/cp-test_multinode-20211214193217-1964-m03_multinode-20211214193217-1964.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964 "sudo cat /home/docker/cp-test_multinode-20211214193217-1964-m03_multinode-20211214193217-1964.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 cp multinode-20211214193217-1964-m03:/home/docker/cp-test.txt multinode-20211214193217-1964-m02:/home/docker/cp-test_multinode-20211214193217-1964-m03_multinode-20211214193217-1964-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 ssh -n multinode-20211214193217-1964-m02 "sudo cat /home/docker/cp-test_multinode-20211214193217-1964-m03_multinode-20211214193217-1964-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (22.64s)

TestMultiNode/serial/StopNode (10.36s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:215: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 node stop m03
multinode_test.go:215: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211214193217-1964 node stop m03: (7.908861521s)
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status: exit status 7 (1.227324649s)

-- stdout --
	multinode-20211214193217-1964
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20211214193217-1964-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20211214193217-1964-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status --alsologtostderr
multinode_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status --alsologtostderr: exit status 7 (1.228197524s)

-- stdout --
	multinode-20211214193217-1964
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20211214193217-1964-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20211214193217-1964-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1214 19:38:39.651366    8870 out.go:297] Setting OutFile to fd 1 ...
	I1214 19:38:39.651525    8870 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:38:39.651529    8870 out.go:310] Setting ErrFile to fd 2...
	I1214 19:38:39.651532    8870 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:38:39.651600    8870 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/bin
	I1214 19:38:39.651772    8870 out.go:304] Setting JSON to false
	I1214 19:38:39.651785    8870 mustload.go:65] Loading cluster: multinode-20211214193217-1964
	I1214 19:38:39.652026    8870 config.go:176] Loaded profile config "multinode-20211214193217-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 19:38:39.652037    8870 status.go:253] checking status of multinode-20211214193217-1964 ...
	I1214 19:38:39.652418    8870 cli_runner.go:115] Run: docker container inspect multinode-20211214193217-1964 --format={{.State.Status}}
	I1214 19:38:39.770035    8870 status.go:328] multinode-20211214193217-1964 host status = "Running" (err=<nil>)
	I1214 19:38:39.770067    8870 host.go:66] Checking if "multinode-20211214193217-1964" exists ...
	I1214 19:38:39.770362    8870 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20211214193217-1964
	I1214 19:38:39.888737    8870 host.go:66] Checking if "multinode-20211214193217-1964" exists ...
	I1214 19:38:39.889011    8870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1214 19:38:39.889080    8870 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211214193217-1964
	I1214 19:38:40.006512    8870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59974 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/multinode-20211214193217-1964/id_rsa Username:docker}
	I1214 19:38:40.093919    8870 ssh_runner.go:195] Run: systemctl --version
	I1214 19:38:40.098506    8870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1214 19:38:40.107701    8870 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20211214193217-1964
	I1214 19:38:40.228589    8870 kubeconfig.go:92] found "multinode-20211214193217-1964" server: "https://127.0.0.1:59973"
	I1214 19:38:40.228624    8870 api_server.go:165] Checking apiserver status ...
	I1214 19:38:40.228664    8870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1214 19:38:40.243954    8870 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1941/cgroup
	I1214 19:38:40.252889    8870 api_server.go:181] apiserver freezer: "7:freezer:/docker/ee0b3f83ce1f22a7af85fac0f5b247772e58f6baf13566ed938cb6dd6189d882/kubepods/burstable/pode48c532d647865e7dc63c63892a6d3e9/ca3e260dfa2566f9e7fe79e29a1b4f61ad44584408df33d0b9cad8186d16b850"
	I1214 19:38:40.252972    8870 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ee0b3f83ce1f22a7af85fac0f5b247772e58f6baf13566ed938cb6dd6189d882/kubepods/burstable/pode48c532d647865e7dc63c63892a6d3e9/ca3e260dfa2566f9e7fe79e29a1b4f61ad44584408df33d0b9cad8186d16b850/freezer.state
	I1214 19:38:40.260504    8870 api_server.go:203] freezer state: "THAWED"
	I1214 19:38:40.260520    8870 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59973/healthz ...
	I1214 19:38:40.266403    8870 api_server.go:266] https://127.0.0.1:59973/healthz returned 200:
	ok
	I1214 19:38:40.266416    8870 status.go:419] multinode-20211214193217-1964 apiserver status = Running (err=<nil>)
	I1214 19:38:40.266426    8870 status.go:255] multinode-20211214193217-1964 status: &{Name:multinode-20211214193217-1964 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1214 19:38:40.266441    8870 status.go:253] checking status of multinode-20211214193217-1964-m02 ...
	I1214 19:38:40.266731    8870 cli_runner.go:115] Run: docker container inspect multinode-20211214193217-1964-m02 --format={{.State.Status}}
	I1214 19:38:40.384242    8870 status.go:328] multinode-20211214193217-1964-m02 host status = "Running" (err=<nil>)
	I1214 19:38:40.384270    8870 host.go:66] Checking if "multinode-20211214193217-1964-m02" exists ...
	I1214 19:38:40.384571    8870 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20211214193217-1964-m02
	I1214 19:38:40.501946    8870 host.go:66] Checking if "multinode-20211214193217-1964-m02" exists ...
	I1214 19:38:40.502218    8870 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1214 19:38:40.502278    8870 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20211214193217-1964-m02
	I1214 19:38:40.621713    8870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60307 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/machines/multinode-20211214193217-1964-m02/id_rsa Username:docker}
	I1214 19:38:40.711434    8870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1214 19:38:40.720526    8870 status.go:255] multinode-20211214193217-1964-m02 status: &{Name:multinode-20211214193217-1964-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1214 19:38:40.720557    8870 status.go:253] checking status of multinode-20211214193217-1964-m03 ...
	I1214 19:38:40.720847    8870 cli_runner.go:115] Run: docker container inspect multinode-20211214193217-1964-m03 --format={{.State.Status}}
	I1214 19:38:40.838632    8870 status.go:328] multinode-20211214193217-1964-m03 host status = "Stopped" (err=<nil>)
	I1214 19:38:40.838657    8870 status.go:341] host is not running, skipping remaining checks
	I1214 19:38:40.838662    8870 status.go:255] multinode-20211214193217-1964-m03 status: &{Name:multinode-20211214193217-1964-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (10.36s)

TestMultiNode/serial/StartAfterStop (50.7s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:249: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 node start m03 --alsologtostderr
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211214193217-1964 node start m03 --alsologtostderr: (48.974383775s)
multinode_test.go:266: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status
multinode_test.go:266: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status: (1.576360489s)
multinode_test.go:280: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (50.70s)

TestMultiNode/serial/RestartKeepsNodes (249.28s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20211214193217-1964
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20211214193217-1964
E1214 19:39:37.719967    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20211214193217-1964: (40.421714347s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211214193217-1964 --wait=true -v=8 --alsologtostderr
E1214 19:40:44.879265    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:41:45.945742    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 19:42:07.949241    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
multinode_test.go:300: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20211214193217-1964 --wait=true -v=8 --alsologtostderr: (3m28.772122358s)
multinode_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20211214193217-1964
--- PASS: TestMultiNode/serial/RestartKeepsNodes (249.28s)

TestMultiNode/serial/DeleteNode (17.38s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:399: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 node delete m03
multinode_test.go:399: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211214193217-1964 node delete m03: (14.455161648s)
multinode_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status --alsologtostderr
multinode_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status --alsologtostderr: (1.120553053s)
multinode_test.go:419: (dbg) Run:  docker volume ls
multinode_test.go:429: (dbg) Run:  kubectl get nodes
multinode_test.go:429: (dbg) Done: kubectl get nodes: (1.631553156s)
multinode_test.go:437: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (17.38s)

TestMultiNode/serial/StopMultiNode (35.39s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:319: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 stop
multinode_test.go:319: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211214193217-1964 stop: (34.852504495s)
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status: exit status 7 (268.704507ms)

-- stdout --
	multinode-20211214193217-1964
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20211214193217-1964-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:332: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status --alsologtostderr
multinode_test.go:332: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status --alsologtostderr: exit status 7 (265.373522ms)

-- stdout --
	multinode-20211214193217-1964
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20211214193217-1964-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1214 19:44:33.364413    9694 out.go:297] Setting OutFile to fd 1 ...
	I1214 19:44:33.364543    9694 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:44:33.364548    9694 out.go:310] Setting ErrFile to fd 2...
	I1214 19:44:33.364551    9694 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I1214 19:44:33.364624    9694 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/bin
	I1214 19:44:33.364787    9694 out.go:304] Setting JSON to false
	I1214 19:44:33.364801    9694 mustload.go:65] Loading cluster: multinode-20211214193217-1964
	I1214 19:44:33.365067    9694 config.go:176] Loaded profile config "multinode-20211214193217-1964": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.22.4
	I1214 19:44:33.365079    9694 status.go:253] checking status of multinode-20211214193217-1964 ...
	I1214 19:44:33.365423    9694 cli_runner.go:115] Run: docker container inspect multinode-20211214193217-1964 --format={{.State.Status}}
	I1214 19:44:33.479218    9694 status.go:328] multinode-20211214193217-1964 host status = "Stopped" (err=<nil>)
	I1214 19:44:33.479246    9694 status.go:341] host is not running, skipping remaining checks
	I1214 19:44:33.479265    9694 status.go:255] multinode-20211214193217-1964 status: &{Name:multinode-20211214193217-1964 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1214 19:44:33.479299    9694 status.go:253] checking status of multinode-20211214193217-1964-m02 ...
	I1214 19:44:33.479625    9694 cli_runner.go:115] Run: docker container inspect multinode-20211214193217-1964-m02 --format={{.State.Status}}
	I1214 19:44:33.590947    9694 status.go:328] multinode-20211214193217-1964-m02 host status = "Stopped" (err=<nil>)
	I1214 19:44:33.590974    9694 status.go:341] host is not running, skipping remaining checks
	I1214 19:44:33.590982    9694 status.go:255] multinode-20211214193217-1964-m02 status: &{Name:multinode-20211214193217-1964-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (35.39s)

TestMultiNode/serial/RestartMultiNode (148.08s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:349: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:359: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211214193217-1964 --wait=true -v=8 --alsologtostderr --driver=docker 
E1214 19:44:37.720850    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:45:44.898750    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 19:46:45.974795    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
multinode_test.go:359: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20211214193217-1964 --wait=true -v=8 --alsologtostderr --driver=docker : (2m25.163844209s)
multinode_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status --alsologtostderr
multinode_test.go:365: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20211214193217-1964 status --alsologtostderr: (1.11588991s)
multinode_test.go:379: (dbg) Run:  kubectl get nodes
multinode_test.go:379: (dbg) Done: kubectl get nodes: (1.649265536s)
multinode_test.go:387: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (148.08s)

TestMultiNode/serial/ValidateNameConflict (104.77s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:448: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20211214193217-1964
multinode_test.go:457: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211214193217-1964-m02 --driver=docker 
multinode_test.go:457: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20211214193217-1964-m02 --driver=docker : exit status 14 (291.669439ms)

-- stdout --
	* [multinode-20211214193217-1964-m02] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13173
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20211214193217-1964-m02' is duplicated with machine name 'multinode-20211214193217-1964-m02' in profile 'multinode-20211214193217-1964'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20211214193217-1964-m03 --driver=docker 
E1214 19:47:40.851599    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
multinode_test.go:465: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20211214193217-1964-m03 --driver=docker : (1m26.717414016s)
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20211214193217-1964
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20211214193217-1964: exit status 80 (601.380285ms)

-- stdout --
	* Adding node m03 to cluster multinode-20211214193217-1964
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20211214193217-1964-m03 already exists in multinode-20211214193217-1964-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:477: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20211214193217-1964-m03
multinode_test.go:477: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20211214193217-1964-m03: (17.115188447s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (104.77s)

TestPreload (207.42s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20211214194909-1964 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E1214 19:49:37.746527    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:50:44.900301    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
preload_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20211214194909-1964 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: (2m26.21342208s)
preload_test.go:62: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20211214194909-1964 -- docker pull busybox
preload_test.go:62: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-20211214194909-1964 -- docker pull busybox: (2.632977121s)
preload_test.go:72: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20211214194909-1964 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3
E1214 19:51:45.983052    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
preload_test.go:72: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20211214194909-1964 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3: (44.770341811s)
preload_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20211214194909-1964 -- docker images
helpers_test.go:176: Cleaning up "test-preload-20211214194909-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20211214194909-1964
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20211214194909-1964: (13.147361414s)
--- PASS: TestPreload (207.42s)

TestScheduledStopUnix (154.8s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20211214195237-1964 --memory=2048 --driver=docker 
scheduled_stop_test.go:129: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20211214195237-1964 --memory=2048 --driver=docker : (1m15.983907061s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20211214195237-1964 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20211214195237-1964 -n scheduled-stop-20211214195237-1964
scheduled_stop_test.go:170: signal error was:  <nil>
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20211214195237-1964 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20211214195237-1964 --cancel-scheduled
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20211214195237-1964 -n scheduled-stop-20211214195237-1964
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20211214195237-1964
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20211214195237-1964 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
E1214 19:54:37.744400    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 19:54:49.051014    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20211214195237-1964
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20211214195237-1964: exit status 7 (151.87093ms)

-- stdout --
	scheduled-stop-20211214195237-1964
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20211214195237-1964 -n scheduled-stop-20211214195237-1964
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20211214195237-1964 -n scheduled-stop-20211214195237-1964: exit status 7 (150.149ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20211214195237-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20211214195237-1964
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20211214195237-1964: (6.241427379s)
--- PASS: TestScheduledStopUnix (154.80s)

TestSkaffold (122.43s)

=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2294260861 version
skaffold_test.go:61: skaffold version: v1.35.1
skaffold_test.go:64: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20211214195511-1964 --memory=2600 --driver=docker 
E1214 19:55:44.904457    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
skaffold_test.go:64: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20211214195511-1964 --memory=2600 --driver=docker : (1m16.052162026s)
skaffold_test.go:84: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:108: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2294260861 run --minikube-profile skaffold-20211214195511-1964 --kube-context skaffold-20211214195511-1964 --status-check=true --port-forward=false --interactive=false
E1214 19:56:45.971364    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
skaffold_test.go:108: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2294260861 run --minikube-profile skaffold-20211214195511-1964 --kube-context skaffold-20211214195511-1964 --status-check=true --port-forward=false --interactive=false: (21.273856112s)
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:343: "leeroy-app-54fd6f6dc5-2xnsm" [de2ac7b5-0130-474a-9f01-524d9bb93011] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011198209s
skaffold_test.go:117: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:343: "leeroy-web-74f756b9c-xgjh9" [075a765f-ae7a-4dd2-979d-0a885085fad6] Running
skaffold_test.go:117: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00599734s
helpers_test.go:176: Cleaning up "skaffold-20211214195511-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20211214195511-1964
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20211214195511-1964: (13.190621431s)
--- PASS: TestSkaffold (122.43s)

TestInsufficientStorage (63.47s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20211214195714-1964 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20211214195714-1964 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (50.5337838s)

-- stdout --
	{"specversion":"1.0","id":"8d435086-9a3d-46fb-9763-919ba91c516d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20211214195714-1964] minikube v1.24.0 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6671d282-8b7d-4806-a58b-6a66cb25e28f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13173"}}
	{"specversion":"1.0","id":"e9349f7a-ee90-4c50-b4e5-87026c7a2f77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig"}}
	{"specversion":"1.0","id":"ae8372cf-9950-4788-b899-58e559284398","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"098f42b2-2f79-4f4c-80c7-f05788fc3494","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube"}}
	{"specversion":"1.0","id":"d33b60af-f7de-44aa-a2f8-042d8e94a540","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f00e9dc6-5b84-41b2-8887-5819ed48e75c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7496aeed-f3a6-4699-a472-72990ce40253","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20211214195714-1964 in cluster insufficient-storage-20211214195714-1964","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1700634-588d-4c53-a9aa-82d86a7fcf5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b44e4938-61c3-4103-8ffb-6362e0b92ef8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0ec6188-e419-4e47-a87c-f1199e0f5527","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20211214195714-1964 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20211214195714-1964 --output=json --layout=cluster: exit status 7 (594.142711ms)

-- stdout --
	{"Name":"insufficient-storage-20211214195714-1964","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.24.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20211214195714-1964","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1214 19:58:05.506821   11667 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20211214195714-1964" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20211214195714-1964 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20211214195714-1964 --output=json --layout=cluster: exit status 7 (598.783095ms)

-- stdout --
	{"Name":"insufficient-storage-20211214195714-1964","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.24.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20211214195714-1964","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1214 19:58:06.105908   11684 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20211214195714-1964" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	E1214 19:58:06.116724   11684 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/insufficient-storage-20211214195714-1964/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20211214195714-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20211214195714-1964
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20211214195714-1964: (11.73741445s)
--- PASS: TestInsufficientStorage (63.47s)

TestRunningBinaryUpgrade (130.92s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3274528798.exe start -p running-upgrade-20211214200350-1964 --memory=2200 --vm-driver=docker 

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.3274528798.exe start -p running-upgrade-20211214200350-1964 --memory=2200 --vm-driver=docker : (1m30.356932307s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-20211214200350-1964 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E1214 20:05:44.901446    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
version_upgrade_test.go:137: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-20211214200350-1964 --memory=2200 --alsologtostderr -v=1 --driver=docker : (31.785989089s)
helpers_test.go:176: Cleaning up "running-upgrade-20211214200350-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20211214200350-1964

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20211214200350-1964: (7.975689812s)
--- PASS: TestRunningBinaryUpgrade (130.92s)

TestKubernetesUpgrade (233.26s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211214200014-1964 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E1214 20:00:44.905056    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211214200014-1964 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : (1m14.416251117s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20211214200014-1964
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20211214200014-1964: (10.147759019s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20211214200014-1964 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20211214200014-1964 status --format={{.Host}}: exit status 7 (155.468421ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211214200014-1964 --memory=2200 --kubernetes-version=v1.23.0-rc.1 --alsologtostderr -v=1 --driver=docker 
E1214 20:01:45.973209    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 20:01:51.173878    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
E1214 20:01:51.179169    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
E1214 20:01:51.189279    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
E1214 20:01:51.209382    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
E1214 20:01:51.249533    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
E1214 20:01:51.329665    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
E1214 20:01:51.499773    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
E1214 20:01:51.826068    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
E1214 20:01:52.471215    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
E1214 20:01:53.757055    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
E1214 20:01:56.324040    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
E1214 20:02:01.452871    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
E1214 20:02:11.702778    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211214200014-1964 --memory=2200 --kubernetes-version=v1.23.0-rc.1 --alsologtostderr -v=1 --driver=docker : (2m0.788191878s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20211214200014-1964 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211214200014-1964 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211214200014-1964 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (335.853689ms)

-- stdout --
	* [kubernetes-upgrade-20211214200014-1964] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13173
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20211214200014-1964
	    minikube start -p kubernetes-upgrade-20211214200014-1964 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20211214200014-19642 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-20211214200014-1964 --kubernetes-version=v1.23.0-rc.1
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211214200014-1964 --memory=2200 --kubernetes-version=v1.23.0-rc.1 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20211214200014-1964 --memory=2200 --kubernetes-version=v1.23.0-rc.1 --alsologtostderr -v=1 --driver=docker : (15.476054694s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20211214200014-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20211214200014-1964
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20211214200014-1964: (11.818637077s)
--- PASS: TestKubernetesUpgrade (233.26s)

TestMissingContainerUpgrade (171.37s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.1985701742.exe start -p missing-upgrade-20211214195818-1964 --memory=2200 --driver=docker 
E1214 19:58:47.983463    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
version_upgrade_test.go:316: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.1985701742.exe start -p missing-upgrade-20211214195818-1964 --memory=2200 --driver=docker : (1m8.26448281s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20211214195818-1964
E1214 19:59:37.742852    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20211214195818-1964: (11.118655265s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20211214195818-1964
version_upgrade_test.go:336: (dbg) Run:  out/minikube-darwin-amd64 start -p missing-upgrade-20211214195818-1964 --memory=2200 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-darwin-amd64 start -p missing-upgrade-20211214195818-1964 --memory=2200 --alsologtostderr -v=1 --driver=docker : (1m19.325891678s)
helpers_test.go:176: Cleaning up "missing-upgrade-20211214195818-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20211214195818-1964
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20211214195818-1964: (11.675691691s)
--- PASS: TestMissingContainerUpgrade (171.37s)

TestStoppedBinaryUpgrade/Setup (0.88s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.88s)

TestStoppedBinaryUpgrade/Upgrade (148.38s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1235646214.exe start -p stopped-upgrade-20211214200110-1964 --memory=2200 --vm-driver=docker 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1235646214.exe start -p stopped-upgrade-20211214200110-1964 --memory=2200 --vm-driver=docker : (1m10.429996908s)
version_upgrade_test.go:199: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1235646214.exe -p stopped-upgrade-20211214200110-1964 stop
E1214 20:02:32.188289    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
version_upgrade_test.go:199: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1235646214.exe -p stopped-upgrade-20211214200110-1964 stop: (15.564699629s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-20211214200110-1964 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E1214 20:03:13.149716    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-20211214200110-1964 --memory=2200 --alsologtostderr -v=1 --driver=docker : (1m2.382317628s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (148.38s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.32s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20211214200110-1964

=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20211214200110-1964: (3.3169053s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.32s)

TestPause/serial/Start (113.96s)

=== RUN   TestPause/serial/Start
pause_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20211214200407-1964 --memory=2048 --install-addons=false --wait=all --driver=docker 
E1214 20:04:20.850851    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
E1214 20:04:35.077024    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
E1214 20:04:37.750328    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory

=== CONT  TestPause/serial/Start
pause_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20211214200407-1964 --memory=2048 --install-addons=false --wait=all --driver=docker : (1m53.958649497s)
--- PASS: TestPause/serial/Start (113.96s)

TestPause/serial/SecondStartNoReconfiguration (8.08s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20211214200407-1964 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20211214200407-1964 --alsologtostderr -v=1 --driver=docker : (8.069844551s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.08s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.35s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20211214200601-1964 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:84: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20211214200601-1964 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (351.659844ms)

-- stdout --
	* [NoKubernetes-20211214200601-1964] minikube v1.24.0 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13173
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.35s)

TestNoKubernetes/serial/StartWithK8s (56.39s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20211214200601-1964 --driver=docker 

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20211214200601-1964 --driver=docker : (55.740482239s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20211214200601-1964 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (56.39s)

TestPause/serial/Pause (0.97s)

=== RUN   TestPause/serial/Pause
pause_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20211214200407-1964 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.97s)

TestPause/serial/VerifyStatus (0.75s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20211214200407-1964 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20211214200407-1964 --output=json --layout=cluster: exit status 2 (751.456018ms)

-- stdout --
	{"Name":"pause-20211214200407-1964","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.24.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20211214200407-1964","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.75s)
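Editor's note on the status payload above: the exit status 2 is expected because `--layout=cluster` reports a paused cluster with status code 418 ("Paused") rather than 200. A minimal sketch of checking that, using only field names that appear in the captured JSON (the `is_paused` helper itself is illustrative, not part of minikube):

```python
import json

# Trimmed copy of the cluster-layout status JSON captured in the log above.
status = json.loads("""
{"Name":"pause-20211214200407-1964","StatusCode":418,"StatusName":"Paused",
 "Nodes":[{"Name":"pause-20211214200407-1964","StatusCode":200,"StatusName":"OK",
  "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
                "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
""")

def is_paused(status: dict) -> bool:
    # A paused cluster reports 418 ("Paused") at the top level; a healthy
    # running one would report 200 ("OK").
    return status["StatusCode"] == 418

print(is_paused(status))  # True for the status captured above
```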

TestPause/serial/Unpause (1.35s)

=== RUN   TestPause/serial/Unpause
pause_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-20211214200407-1964 --alsologtostderr -v=5
pause_test.go:122: (dbg) Done: out/minikube-darwin-amd64 unpause -p pause-20211214200407-1964 --alsologtostderr -v=5: (1.34513284s)
--- PASS: TestPause/serial/Unpause (1.35s)

TestPause/serial/PauseAgain (1.11s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20211214200407-1964 --alsologtostderr -v=5
pause_test.go:111: (dbg) Done: out/minikube-darwin-amd64 pause -p pause-20211214200407-1964 --alsologtostderr -v=5: (1.106519374s)
--- PASS: TestPause/serial/PauseAgain (1.11s)

TestPause/serial/DeletePaused (13.28s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-20211214200407-1964 --alsologtostderr -v=5
pause_test.go:133: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-20211214200407-1964 --alsologtostderr -v=5: (13.276081085s)
--- PASS: TestPause/serial/DeletePaused (13.28s)

TestPause/serial/VerifyDeletedResources (1.02s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:169: (dbg) Run:  docker ps -a
pause_test.go:174: (dbg) Run:  docker volume inspect pause-20211214200407-1964
pause_test.go:174: (dbg) Non-zero exit: docker volume inspect pause-20211214200407-1964: exit status 1 (112.73678ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20211214200407-1964

** /stderr **
pause_test.go:179: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.02s)

TestNoKubernetes/serial/StartWithStopK8s (22.75s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20211214200601-1964 --no-kubernetes --driver=docker 
no_kubernetes_test.go:113: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20211214200601-1964 --no-kubernetes --driver=docker : (14.352936431s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20211214200601-1964 status -o json
no_kubernetes_test.go:201: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20211214200601-1964 status -o json: exit status 2 (623.064039ms)

-- stdout --
	{"Name":"NoKubernetes-20211214200601-1964","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20211214200601-1964
E1214 20:07:18.921256    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
no_kubernetes_test.go:125: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20211214200601-1964: (7.777003082s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.75s)
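Editor's note on the `status -o json` line above: the non-zero exit is expected for a `--no-kubernetes` profile, since the container host is running while kubelet and apiserver are deliberately stopped. A small sketch parsing that payload (field names taken from the captured JSON; the `host_only` check is illustrative):

```python
import json

# Status JSON captured in the log above for the --no-kubernetes profile.
status = json.loads(
    '{"Name":"NoKubernetes-20211214200601-1964","Host":"Running",'
    '"Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured",'
    '"Worker":false}'
)

# Host up, Kubernetes components down: exactly what --no-kubernetes produces.
host_only = status["Host"] == "Running" and status["Kubelet"] == "Stopped"
print(host_only)  # True for the status captured above
```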

TestNoKubernetes/serial/Start (37.62s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20211214200601-1964 --no-kubernetes --driver=docker 

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20211214200601-1964 --no-kubernetes --driver=docker : (37.617938044s)
--- PASS: TestNoKubernetes/serial/Start (37.62s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (9.22s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.24.0 on darwin
- MINIKUBE_LOCATION=13173
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.11.0-to-current4031619315
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.11.0-to-current4031619315/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.11.0-to-current4031619315/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.11.0-to-current4031619315/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (9.22s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.29s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.24.0 on darwin
- MINIKUBE_LOCATION=13173
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.2.0-to-current191748737
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.2.0-to-current191748737/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.2.0-to-current191748737/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/upgrade-v1.2.0-to-current191748737/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.29s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.6s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20211214200601-1964 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20211214200601-1964 "sudo systemctl is-active --quiet service kubelet": exit status 1 (602.15855ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.60s)

TestNoKubernetes/serial/ProfileList (1.42s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:170: (dbg) Run:  out/minikube-darwin-amd64 profile list
* Starting control plane node minikube in cluster minikube
* Download complete!

=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.42s)

TestNoKubernetes/serial/Stop (8.16s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20211214200601-1964

=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20211214200601-1964: (8.157079661s)
--- PASS: TestNoKubernetes/serial/Stop (8.16s)

TestNoKubernetes/serial/StartNoArgs (19.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20211214200601-1964 --driver=docker 
no_kubernetes_test.go:192: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20211214200601-1964 --driver=docker : (19.465516838s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.47s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.61s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20211214200601-1964 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20211214200601-1964 "sudo systemctl is-active --quiet service kubelet": exit status 1 (607.059179ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.61s)

TestNetworkPlugins/group/auto/Start (123.47s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20211214195817-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20211214195817-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (2m3.470771995s)
--- PASS: TestNetworkPlugins/group/auto/Start (123.47s)

TestNetworkPlugins/group/false/Start (113.18s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20211214195818-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p false-20211214195818-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (1m53.181034856s)
--- PASS: TestNetworkPlugins/group/false/Start (113.18s)

TestNetworkPlugins/group/auto/KubeletFlags (0.67s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20211214195817-1964 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.67s)

TestNetworkPlugins/group/auto/NetCatPod (15.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context auto-20211214195817-1964 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context auto-20211214195817-1964 replace --force -f testdata/netcat-deployment.yaml: (2.146579539s)
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-7bfd7f67bc-p5rs2" [fd6d0c63-9c79-4e9a-b99e-c56654e21d87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-7bfd7f67bc-p5rs2" [fd6d0c63-9c79-4e9a-b99e-c56654e21d87] Running
E1214 20:14:37.757481    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.015028116s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.19s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20211214195817-1964 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:182: (dbg) Run:  kubectl --context auto-20211214195817-1964 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (5.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:232: (dbg) Run:  kubectl --context auto-20211214195817-1964 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:232: (dbg) Non-zero exit: kubectl --context auto-20211214195817-1964 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.146071434s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.15s)

TestNetworkPlugins/group/cilium/Start (172.89s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20211214195818-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20211214195818-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (2m52.892309948s)
--- PASS: TestNetworkPlugins/group/cilium/Start (172.89s)

TestNetworkPlugins/group/false/KubeletFlags (0.66s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20211214195818-1964 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.66s)

TestNetworkPlugins/group/false/NetCatPod (14.82s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context false-20211214195818-1964 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context false-20211214195818-1964 replace --force -f testdata/netcat-deployment.yaml: (1.788171711s)
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-7bfd7f67bc-xgpgm" [c654a36f-5994-4b14-a7ee-7cacff88ab99] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1214 20:15:27.997756    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
helpers_test.go:343: "netcat-7bfd7f67bc-xgpgm" [c654a36f-5994-4b14-a7ee-7cacff88ab99] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.007056913s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.82s)

TestNetworkPlugins/group/false/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20211214195818-1964 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

TestNetworkPlugins/group/false/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:182: (dbg) Run:  kubectl --context false-20211214195818-1964 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

TestNetworkPlugins/group/false/HairPin (5.14s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:232: (dbg) Run:  kubectl --context false-20211214195818-1964 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:232: (dbg) Non-zero exit: kubectl --context false-20211214195818-1964 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.139314754s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.14s)

TestNetworkPlugins/group/cilium/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-rsjj2" [8715bf77-4128-4808-8a06-e958a98ad07c] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.022732687s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.03s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.65s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20211214195818-1964 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.65s)

TestNetworkPlugins/group/cilium/NetCatPod (16.42s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context cilium-20211214195818-1964 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context cilium-20211214195818-1964 replace --force -f testdata/netcat-deployment.yaml: (2.385208585s)
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-7bfd7f67bc-5ljvh" [7e9270fc-fad7-47e2-84d4-9ba33fbc50eb] Pending
helpers_test.go:343: "netcat-7bfd7f67bc-5ljvh" [7e9270fc-fad7-47e2-84d4-9ba33fbc50eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-7bfd7f67bc-5ljvh" [7e9270fc-fad7-47e2-84d4-9ba33fbc50eb] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 14.007882652s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (16.42s)

TestNetworkPlugins/group/cilium/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20211214195818-1964 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.16s)

TestNetworkPlugins/group/cilium/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:182: (dbg) Run:  kubectl --context cilium-20211214195818-1964 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.13s)

TestNetworkPlugins/group/cilium/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:232: (dbg) Run:  kubectl --context cilium-20211214195818-1964 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.14s)

TestNetworkPlugins/group/custom-weave/Start (69.05s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-weave-20211214195818-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker 
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p custom-weave-20211214195818-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker : (1m9.046572937s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (69.05s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.67s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-weave-20211214195818-1964 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.67s)


TestNetworkPlugins/group/custom-weave/NetCatPod (13.96s)

net_test.go:132: (dbg) Run:  kubectl --context custom-weave-20211214195818-1964 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context custom-weave-20211214195818-1964 replace --force -f testdata/netcat-deployment.yaml: (1.913640399s)
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-7bfd7f67bc-6nwth" [2657b253-9fb8-4e4b-a3bc-c732bfc98fb7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1214 20:19:27.078333    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 20:19:27.083417    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 20:19:27.093745    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 20:19:27.115141    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 20:19:27.155598    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 20:19:27.242160    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 20:19:27.403728    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 20:19:27.729149    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 20:19:28.376978    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 20:19:29.657077    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 20:19:32.225383    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
helpers_test.go:343: "netcat-7bfd7f67bc-6nwth" [2657b253-9fb8-4e4b-a3bc-c732bfc98fb7] Running
E1214 20:19:37.353732    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 20:19:37.755664    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/addons-20211214190629-1964/client.crt: no such file or directory
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 12.00785544s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (13.96s)

TestNetworkPlugins/group/enable-default-cni/Start (55.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20211214195817-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
E1214 20:19:47.603709    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 20:20:08.087006    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
E1214 20:20:21.095023    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
E1214 20:20:21.100171    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
E1214 20:20:21.111242    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
E1214 20:20:21.136886    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
E1214 20:20:21.186860    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
E1214 20:20:21.271232    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
E1214 20:20:21.436909    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
E1214 20:20:21.764741    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
E1214 20:20:22.404907    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
E1214 20:20:23.687219    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
E1214 20:20:26.253338    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
E1214 20:20:31.373530    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
E1214 20:20:41.730298    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/false-20211214195818-1964/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20211214195817-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (55.464491076s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (55.46s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.66s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20211214195817-1964 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.66s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.88s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20211214195817-1964 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context enable-default-cni-20211214195817-1964 replace --force -f testdata/netcat-deployment.yaml: (1.828841872s)
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-7bfd7f67bc-c275w" [ea9ca180-7061-433a-a218-3ecfd341d029] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1214 20:20:44.906903    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/ingress-addon-legacy-20211214191813-1964/client.crt: no such file or directory
E1214 20:20:49.053385    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/auto-20211214195817-1964/client.crt: no such file or directory
helpers_test.go:343: "netcat-7bfd7f67bc-c275w" [ea9ca180-7061-433a-a218-3ecfd341d029] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.00644723s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.88s)

TestNetworkPlugins/group/bridge/Start (84.02s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20211214195817-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
E1214 20:26:46.037043    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 20:26:51.241027    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
E1214 20:27:10.123699    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/custom-weave-20211214195818-1964/client.crt: no such file or directory
E1214 20:27:44.257794    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/cilium-20211214195818-1964/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20211214195817-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (1m24.022207914s)
--- PASS: TestNetworkPlugins/group/bridge/Start (84.02s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.69s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20211214195817-1964 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.69s)

TestNetworkPlugins/group/bridge/NetCatPod (17.79s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20211214195817-1964 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context bridge-20211214195817-1964 replace --force -f testdata/netcat-deployment.yaml: (1.745078203s)
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-7bfd7f67bc-xqh7c" [79436c09-0aa3-4dc0-be62-593a3b78d3e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-7bfd7f67bc-xqh7c" [79436c09-0aa3-4dc0-be62-593a3b78d3e3] Running
E1214 20:28:09.114055    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 16.014288809s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (17.79s)

TestNetworkPlugins/group/kubenet/Start (347.35s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20211214195817-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 
E1214 20:31:25.859084    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/enable-default-cni-20211214195817-1964/client.crt: no such file or directory
E1214 20:31:46.036776    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/functional-20211214191315-1964/client.crt: no such file or directory
E1214 20:31:51.239616    1964 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13173-816-7fa4ce093861046ed4d109975b74ec5f157758ca/.minikube/profiles/skaffold-20211214195511-1964/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-20211214195817-1964 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (5m47.353531752s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (347.35s)

TestNetworkPlugins/group/kubenet/KubeletFlags (1.00s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-20211214195817-1964 "pgrep -a kubelet"
=== CONT  TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:120: (dbg) Done: out/minikube-darwin-amd64 ssh -p kubenet-20211214195817-1964 "pgrep -a kubelet": (1.003208958s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (1.00s)

TestNetworkPlugins/group/kubenet/NetCatPod (14.92s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kubenet-20211214195817-1964 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context kubenet-20211214195817-1964 replace --force -f testdata/netcat-deployment.yaml: (1.888746656s)
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-7bfd7f67bc-bc5x9" [00fcd3af-9652-4118-9285-f678a6fedc23] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:343: "netcat-7bfd7f67bc-bc5x9" [00fcd3af-9652-4118-9285-f678a6fedc23] Running
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.011374702s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.92s)

Test skip (18/226)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.22.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.4/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.4/cached-images (0.00s)

TestDownloadOnly/v1.22.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.4/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.4/binaries (0.00s)

TestDownloadOnly/v1.23.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.0-rc.1/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.23.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.0-rc.1/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.0-rc.1/binaries (0.00s)

TestAddons/parallel/Registry (14.84s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 13.192765ms
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-6rvrw" [f7b77474-4760-4204-ab5b-2bcfd51e2437] Running
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.020376151s
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-bkd54" [ba9a81aa-956a-4e01-8885-e9a54779bb83] Running
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008083492s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20211214190629-1964 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20211214190629-1964 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:296: (dbg) Done: kubectl --context addons-20211214190629-1964 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.729311762s)
addons_test.go:306: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.84s)

TestAddons/parallel/Ingress (12.29s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20211214190629-1964 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Run:  kubectl --context addons-20211214190629-1964 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context addons-20211214190629-1964 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [5cb765de-d4ae-470f-984f-8b3fd77ef511] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [5cb765de-d4ae-470f-984f-8b3fd77ef511] Running
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.01173121s
addons_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20211214190629-1964 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.29s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmd (15.19s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1409: (dbg) Run:  kubectl --context functional-20211214191315-1964 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1415: (dbg) Run:  kubectl --context functional-20211214191315-1964 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1420: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6cbfcd7cbc-jt8ln" [78b3daa0-3a28-42f6-9fbe-50a62a7268a7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6cbfcd7cbc-jt8ln" [78b3daa0-3a28-42f6-9fbe-50a62a7268a7] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1420: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 14.007711465s
functional_test.go:1425: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20211214191315-1964 service list
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1425: (dbg) Done: out/minikube-darwin-amd64 -p functional-20211214191315-1964 service list: (1.05391441s)
functional_test.go:1434: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmd (15.19s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:528: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (33.64s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20211214191813-1964 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20211214191813-1964 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.185462172s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211214191813-1964 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20211214191813-1964 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (198.14499ms)

** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.110.157.72:443: connect: connection refused
** /stderr **

addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211214191813-1964 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20211214191813-1964 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (164.023661ms)

** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.110.157.72:443: connect: connection refused
** /stderr **

addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211214191813-1964 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20211214191813-1964 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (156.217906ms)

** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.110.157.72:443: connect: connection refused
** /stderr **

addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211214191813-1964 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20211214191813-1964 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (163.526139ms)

** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.110.157.72:443: connect: connection refused
** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211214191813-1964 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20211214191813-1964 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [f35351e9-c449-4112-b7b7-d4f9e497546e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [f35351e9-c449-4112-b7b7-d4f9e497546e] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.008097055s
addons_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20211214191813-1964 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (33.64s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.87s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20211214195817-1964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20211214195817-1964
--- SKIP: TestNetworkPlugins/group/flannel (0.87s)