Test Report: Docker_macOS 13730

eb19396baacb27bcde6912a0ea5aa6419fc16109:2022-03-29:23253

Failed tests (4/299)

| Order | Failed test                                 | Duration (s) |
|-------|---------------------------------------------|--------------|
| 4     | TestDownloadOnly/v1.16.0/preload-exists     | 0.19         |
| 74    | TestFunctional/serial/ExtraConfig           | 30.8         |
| 75    | TestFunctional/serial/ComponentHealth       | 11.24        |
| 305   | TestNetworkPlugins/group/custom-weave/Start | 550.32       |
TestDownloadOnly/v1.16.0/preload-exists (0.19s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:109: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.19s)
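For context, a minimal sketch, assuming the path layout in the error above, of the kind of existence check aaa_download_only_test.go:109 performs. The helper name and signature here are hypothetical, not minikube's actual code; only the tarball path and the error wording are taken from the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadExists mirrors the failed check above: stat the preloaded-images
// tarball under the .minikube directory and surface the stat error if the
// file is missing. The naming scheme is inferred from the logged path.
func preloadExists(minikubeDir, k8sVersion string) error {
	tarball := filepath.Join(minikubeDir, "cache", "preloaded-tarball",
		fmt.Sprintf("preloaded-images-k8s-v17-%s-docker-overlay2-amd64.tar.lz4", k8sVersion))
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("failed to verify preloaded tarball file exists: %w", err)
	}
	return nil
}

func main() {
	if err := preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.16.0"); err != nil {
		fmt.Println(err)
	}
}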

TestFunctional/serial/ExtraConfig (30.8s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:754: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220329101744-2053 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:754: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220329101744-2053 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (26.497012882s)

-- stdout --
	* [functional-20220329101744-2053] minikube v1.25.2 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13730
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node functional-20220329101744-2053 in cluster functional-20220329101744-2053
	* Pulling base image ...
	* Updating the running docker "functional-20220329101744-2053" container ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	

-- /stdout --
** stderr ** 
	E0329 10:20:23.716953    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "coredns-64897985d-cgqv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	E0329 10:20:23.721446    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "etcd-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	E0329 10:20:23.725937    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "kube-apiserver-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	E0329 10:20:23.926389    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "kube-controller-manager-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	E0329 10:20:24.326799    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "kube-proxy-gwwvl" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	E0329 10:20:24.726776    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "kube-scheduler-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-20220329101744-2053": Get "https://127.0.0.1:52218/api/v1/nodes/functional-20220329101744-2053": EOF
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:756: failed to restart minikube. args "out/minikube-darwin-amd64 start -p functional-20220329101744-2053 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:758: restart took 26.49731526s for "functional-20220329101744-2053" cluster.
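The failing assertion in functional_test.go boils down to running the minikube binary and checking its exit status. A self-contained sketch of that shape, assuming a plain os/exec harness (the real test helpers differ; the args and the exit status 80 are taken from the log above):

package functional_test

import (
	"os/exec"
	"testing"
)

// TestExtraConfigSketch is a hypothetical reconstruction of the assertion
// at functional_test.go:754-756: run the start command, then fail the test
// on a non-zero exit such as the status 80 logged above.
func TestExtraConfigSketch(t *testing.T) {
	args := []string{
		"start", "-p", "functional-20220329101744-2053",
		"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
		"--wait=all",
	}
	out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		t.Fatalf("failed to restart minikube. args %q: exit status %d\n%s",
			args, exitErr.ExitCode(), out)
	}
}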
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect functional-20220329101744-2053
helpers_test.go:236: (dbg) docker inspect functional-20220329101744-2053:

-- stdout --
	[
	    {
	        "Id": "ea38e6ed40e80ad877ab8f305094fe2a1bd1281cbb276da027659cb556a90c0c",
	        "Created": "2022-03-29T17:17:56.88045748Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 32865,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-29T17:18:05.331720759Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/ea38e6ed40e80ad877ab8f305094fe2a1bd1281cbb276da027659cb556a90c0c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ea38e6ed40e80ad877ab8f305094fe2a1bd1281cbb276da027659cb556a90c0c/hostname",
	        "HostsPath": "/var/lib/docker/containers/ea38e6ed40e80ad877ab8f305094fe2a1bd1281cbb276da027659cb556a90c0c/hosts",
	        "LogPath": "/var/lib/docker/containers/ea38e6ed40e80ad877ab8f305094fe2a1bd1281cbb276da027659cb556a90c0c/ea38e6ed40e80ad877ab8f305094fe2a1bd1281cbb276da027659cb556a90c0c-json.log",
	        "Name": "/functional-20220329101744-2053",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220329101744-2053:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220329101744-2053",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c0192e98a580d706dfce60f37c1739397d94b837e28279c9cea240224752ae81-init/diff:/var/lib/docker/overlay2/ed8311db7d03c646788fa108908d5a1d8dddbaed01ecc14e796262809d634ef8/diff:/var/lib/docker/overlay2/fffad9e78bb3ec7e8b5ea36a3468bf8ad790e12f5aed873eb38fef827d7819fe/diff:/var/lib/docker/overlay2/0884cb8dd1433ab9b10af09dfc2db455976180eea4d3e98f437636fe78827973/diff:/var/lib/docker/overlay2/ad6230d3d71ac6d03ec77bba7173b167ee5a5699e7c23d8c189916cda703f311/diff:/var/lib/docker/overlay2/0702d3d4f17839407c1ee9378a34037e4f600920369a546685de73c8539c346d/diff:/var/lib/docker/overlay2/247b25a75047a5666eb37c7cff671c04ce71a5bc50a9956dd60897c584fc1f59/diff:/var/lib/docker/overlay2/837412f364b27df35b5665a99ab6cd231e8de69bae0f73bb28bad6ed0dd128f5/diff:/var/lib/docker/overlay2/531154dc6a26469f67410ef43d135306a3eedbc67f74a21a32538d617c77c318/diff:/var/lib/docker/overlay2/229d67a6594733bc5019727a0811252250289d2f61510cedaa4ead33d70330ab/diff:/var/lib/docker/overlay2/f7b1fb
eba30fb183ce01e8abaf7f8e26555ae1fa899e3dc78c646c25860907e7/diff:/var/lib/docker/overlay2/01f923b173dbf278a094561e0671bf034644bb4c2a4bda564053248bddd1cded/diff:/var/lib/docker/overlay2/912d5b7621231c6a8cbe746efb19c4efdc43b3e5fb88cc798251161aad015dd1/diff:/var/lib/docker/overlay2/d78bba14768b6a4768f26b406b68719bf1f0529762c5414a2e66c102bb2dbe71/diff:/var/lib/docker/overlay2/ffef64e071881a1b5c0da519ac56ce3ed67d288e8acf9cb89dbff58600113528/diff:/var/lib/docker/overlay2/aac4aad89acc5de8af0000e113f40a2b16664b088b5a11558cc39799b58a1c2b/diff:/var/lib/docker/overlay2/e762af91a33bb52ea1c8b1cd14b715553facd0cfc6df4b0c95c127811722532f/diff:/var/lib/docker/overlay2/cb812104812b7b9f9f0fb2017da3ddc5d4dcc14449d0487fc9c5c415525295ee/diff:/var/lib/docker/overlay2/37f2490d17e9c4953ec715b1612c0e27aaec35650d51458e1236463dd4a58b6e/diff:/var/lib/docker/overlay2/40da79e6dfd2a9d937de1adbf1ea8003dbc69aeea53d384c6b6054e1b6bd5175/diff:/var/lib/docker/overlay2/dabd87cf915b8c6844c9e01839884ea36a80ac63b4a0b48807ae1c7e4d50880f/diff:/var/lib/d
ocker/overlay2/52f54f0a2ef32e65e13e2d0291eab6d777c705dab895c2756886dfe506770b67/diff:/var/lib/docker/overlay2/96e29a8e30ef05c20455a905506a683c59782040ec9e915b8ce7a1e350a1ca15/diff:/var/lib/docker/overlay2/d13cd5bc75428f331da47ddc7278e354da3e5bf55a2865a3adcb5a2783d20ece/diff:/var/lib/docker/overlay2/9c2d77ccb10db700ff80711d5423a863d697548e98169f21dd8771fe5295a701/diff:/var/lib/docker/overlay2/76716609e1be01b86880eab934cfb80ef7e4b8aebfb34e3f0a319c7749f2c0a6/diff:/var/lib/docker/overlay2/1e437f9ef5e019af60a5e498295b2bbc9912f077fc593b09eb222057079fedf8/diff:/var/lib/docker/overlay2/63e12553dff4fbd6ccd84c18767825ca3205f216fec3abf3996238c658aed421/diff:/var/lib/docker/overlay2/3855d310128c2ce844b1c4eec4376b6508c7d9ebde1c06803b3d4a699b47d7a2/diff:/var/lib/docker/overlay2/bbde0e6b887250e86f09f8e916dbc892e9c55c55145dcd05872c2a429dd6688c/diff:/var/lib/docker/overlay2/77097752f9b18de746139649a4f9cf5778c6a556501cdeae76b30c84102a2ad4/diff:/var/lib/docker/overlay2/c81fc30e11f320f4767cd26e30359508e11c5abcf3dea34ce5a66eef8e5
fafe6/diff:/var/lib/docker/overlay2/4ea9241abe44119b374b7e129cca5b4fe04d087ca36c3b57967c69eb724c6c81/diff:/var/lib/docker/overlay2/606560f69d52a56c66502f40fecef53f0f79cd1f4f513169ed1203b66db79b5f/diff:/var/lib/docker/overlay2/6712571c65e7f0f6617d12edd03e18393d7353546adec47601cd6474de23f21d/diff:/var/lib/docker/overlay2/13c973edc7a5a46c7a8d3e2b9176907b3e98b21624bd35bd83da411699f16e4e/diff:/var/lib/docker/overlay2/030befada2a963170d65bb2a31ec0ec42eba1ff1bb8aba8240cbf840d7d3d371/diff:/var/lib/docker/overlay2/ac9d59d6e110e5187326f8e7cb3b7a2ccd103fecc01cc685430e655ee9e65443/diff:/var/lib/docker/overlay2/57e60640eba0ca057651f8e9237e8cf8463274e500717b34328689375e822439/diff:/var/lib/docker/overlay2/aab485f0e5d1476b141999faa273823cd68233c08265b68d5a8ed0ae024b00af/diff:/var/lib/docker/overlay2/f7ba5605eda2c32092b20df56b38d4b4087611c36717abedaf45e38c4f5772a7/diff:/var/lib/docker/overlay2/a880e812dfcc4e2d109077663acc43d91ae2f9fb6aafa9778fcc4ea35b2bd270/diff:/var/lib/docker/overlay2/74d9ed5ae5bcc9f703b641f24730f252bfe3bf
25347ddb81d302b2a68396b787/diff:/var/lib/docker/overlay2/fc3eaabab34142464de90a97b9b7a9b2eb1a4f4a0c9c47d674162d116597e1cb/diff:/var/lib/docker/overlay2/925393f6e25c9f479452aa9b73178021bd52b4995e89da06dcaf12b58e6a738e/diff:/var/lib/docker/overlay2/abbcdd25ad33e602512274bdca503a41f0ad5acecd3d040e23bb86b8d0b7ac67/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0192e98a580d706dfce60f37c1739397d94b837e28279c9cea240224752ae81/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0192e98a580d706dfce60f37c1739397d94b837e28279c9cea240224752ae81/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0192e98a580d706dfce60f37c1739397d94b837e28279c9cea240224752ae81/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220329101744-2053",
	                "Source": "/var/lib/docker/volumes/functional-20220329101744-2053/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220329101744-2053",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220329101744-2053",
	                "name.minikube.sigs.k8s.io": "functional-20220329101744-2053",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c714e37f4bf7b39a979c22e4b123cc0bfe6beaa1d68fa2f2f3d4a22cbeca9452",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52219"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52215"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52216"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52217"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52218"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c714e37f4bf7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220329101744-2053": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ea38e6ed40e8",
	                        "functional-20220329101744-2053"
	                    ],
	                    "NetworkID": "ee1e6efd28dce3238bcf209dff9dd4541068959a153a07c92ad2417dcb04ff76",
	                    "EndpointID": "5ff0d6b4aeaac202ef9f262a0af7ff981c5fa7b8eeb1a475ea2fc338ea6c0186",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
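The 8441/tcp mapping above (host port 52218) is the apiserver endpoint whose EOF triggered this failure. As a sketch, the same mapping can be read back with the kind of inspect template the harness itself runs later in these logs; the container name is taken from this report, and the standalone program below is illustrative rather than minikube code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Resolve the host port that Docker mapped to the container's 8441/tcp
// (the apiserver port) using a Go template passed to docker inspect.
func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
		"functional-20220329101744-2053").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
}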
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20220329101744-2053 -n functional-20220329101744-2053
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20220329101744-2053 -n functional-20220329101744-2053: exit status 2 (618.645675ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 logs -n 25: (2.905217733s)
helpers_test.go:253: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------|--------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                                    Args                                     |            Profile             |  User   | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------------------------------------------|--------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | nospam-20220329101558-2053 --log_dir                                        | nospam-20220329101558-2053     | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:17 PDT | Tue, 29 Mar 2022 10:17:17 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 |                                |         |         |                               |                               |
	|         | pause                                                                       |                                |         |         |                               |                               |
	| -p      | nospam-20220329101558-2053 --log_dir                                        | nospam-20220329101558-2053     | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:17 PDT | Tue, 29 Mar 2022 10:17:18 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 |                                |         |         |                               |                               |
	|         | unpause                                                                     |                                |         |         |                               |                               |
	| -p      | nospam-20220329101558-2053 --log_dir                                        | nospam-20220329101558-2053     | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:18 PDT | Tue, 29 Mar 2022 10:17:19 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 |                                |         |         |                               |                               |
	|         | unpause                                                                     |                                |         |         |                               |                               |
	| -p      | nospam-20220329101558-2053 --log_dir                                        | nospam-20220329101558-2053     | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:19 PDT | Tue, 29 Mar 2022 10:17:19 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 |                                |         |         |                               |                               |
	|         | unpause                                                                     |                                |         |         |                               |                               |
	| -p      | nospam-20220329101558-2053 --log_dir                                        | nospam-20220329101558-2053     | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:20 PDT | Tue, 29 Mar 2022 10:17:37 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 |                                |         |         |                               |                               |
	|         | stop                                                                        |                                |         |         |                               |                               |
	| -p      | nospam-20220329101558-2053 --log_dir                                        | nospam-20220329101558-2053     | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:37 PDT | Tue, 29 Mar 2022 10:17:37 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 |                                |         |         |                               |                               |
	|         | stop                                                                        |                                |         |         |                               |                               |
	| -p      | nospam-20220329101558-2053 --log_dir                                        | nospam-20220329101558-2053     | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:37 PDT | Tue, 29 Mar 2022 10:17:38 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 |                                |         |         |                               |                               |
	|         | stop                                                                        |                                |         |         |                               |                               |
	| delete  | -p nospam-20220329101558-2053                                               | nospam-20220329101558-2053     | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:38 PDT | Tue, 29 Mar 2022 10:17:44 PDT |
	| start   | -p                                                                          | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:44 PDT | Tue, 29 Mar 2022 10:19:47 PDT |
	|         | functional-20220329101744-2053                                              |                                |         |         |                               |                               |
	|         | --memory=4000                                                               |                                |         |         |                               |                               |
	|         | --apiserver-port=8441                                                       |                                |         |         |                               |                               |
	|         | --wait=all --driver=docker                                                  |                                |         |         |                               |                               |
	| start   | -p                                                                          | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:19:47 PDT | Tue, 29 Mar 2022 10:19:54 PDT |
	|         | functional-20220329101744-2053                                              |                                |         |         |                               |                               |
	|         | --alsologtostderr -v=8                                                      |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:19:56 PDT | Tue, 29 Mar 2022 10:19:58 PDT |
	|         | cache add k8s.gcr.io/pause:3.1                                              |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:19:58 PDT | Tue, 29 Mar 2022 10:20:00 PDT |
	|         | cache add k8s.gcr.io/pause:3.3                                              |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:00 PDT | Tue, 29 Mar 2022 10:20:02 PDT |
	|         | cache add                                                                   |                                |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                     |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053 cache add                                    | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:02 PDT | Tue, 29 Mar 2022 10:20:04 PDT |
	|         | minikube-local-cache-test:functional-20220329101744-2053                    |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053 cache delete                                 | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:04 PDT | Tue, 29 Mar 2022 10:20:04 PDT |
	|         | minikube-local-cache-test:functional-20220329101744-2053                    |                                |         |         |                               |                               |
	| cache   | delete k8s.gcr.io/pause:3.3                                                 | minikube                       | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:04 PDT | Tue, 29 Mar 2022 10:20:04 PDT |
	| cache   | list                                                                        | minikube                       | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:04 PDT | Tue, 29 Mar 2022 10:20:04 PDT |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:04 PDT | Tue, 29 Mar 2022 10:20:05 PDT |
	|         | ssh sudo crictl images                                                      |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:05 PDT | Tue, 29 Mar 2022 10:20:05 PDT |
	|         | ssh sudo docker rmi                                                         |                                |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                     |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:06 PDT | Tue, 29 Mar 2022 10:20:07 PDT |
	|         | cache reload                                                                |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:07 PDT | Tue, 29 Mar 2022 10:20:08 PDT |
	|         | ssh sudo crictl inspecti                                                    |                                |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                     |                                |         |         |                               |                               |
	| cache   | delete k8s.gcr.io/pause:3.1                                                 | minikube                       | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:08 PDT | Tue, 29 Mar 2022 10:20:08 PDT |
	| cache   | delete k8s.gcr.io/pause:latest                                              | minikube                       | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:08 PDT | Tue, 29 Mar 2022 10:20:08 PDT |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:08 PDT | Tue, 29 Mar 2022 10:20:09 PDT |
	|         | kubectl -- --context                                                        |                                |         |         |                               |                               |
	|         | functional-20220329101744-2053                                              |                                |         |         |                               |                               |
	|         | get pods                                                                    |                                |         |         |                               |                               |
	| kubectl | --profile=functional-20220329101744-2053                                    | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:09 PDT | Tue, 29 Mar 2022 10:20:09 PDT |
	|         | -- --context                                                                |                                |         |         |                               |                               |
	|         | functional-20220329101744-2053 get pods                                     |                                |         |         |                               |                               |
	|---------|-----------------------------------------------------------------------------|--------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 10:20:09
	Running on machine: 37310
	Binary: Built with gc go1.17.7 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 10:20:09.803708    3917 out.go:297] Setting OutFile to fd 1 ...
	I0329 10:20:09.803842    3917 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:20:09.803845    3917 out.go:310] Setting ErrFile to fd 2...
	I0329 10:20:09.803848    3917 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:20:09.803918    3917 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 10:20:09.804161    3917 out.go:304] Setting JSON to false
	I0329 10:20:09.819165    3917 start.go:114] hostinfo: {"hostname":"37310.local","uptime":1184,"bootTime":1648573225,"procs":319,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0329 10:20:09.819244    3917 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 10:20:09.846104    3917 out.go:176] * [functional-20220329101744-2053] minikube v1.25.2 on Darwin 11.2.3
	I0329 10:20:09.846223    3917 notify.go:193] Checking for updates...
	I0329 10:20:09.872138    3917 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 10:20:09.897704    3917 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 10:20:09.923792    3917 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0329 10:20:09.949763    3917 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 10:20:09.975610    3917 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	I0329 10:20:09.975999    3917 config.go:176] Loaded profile config "functional-20220329101744-2053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 10:20:09.976035    3917 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 10:20:10.074872    3917 docker.go:137] docker version: linux-20.10.6
	I0329 10:20:10.074995    3917 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 10:20:10.256426    3917 info.go:263] docker info: {ID:4GJZ:5WQJ:PTOH:OBGV:UGLB:2QMR:SRUC:WPW4:I7LT:V2VN:S3VH:GWN3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:53 SystemTime:2022-03-29 17:20:10.199972347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0329 10:20:10.304949    3917 out.go:176] * Using the docker driver based on existing profile
	I0329 10:20:10.304967    3917 start.go:283] selected driver: docker
	I0329 10:20:10.304971    3917 start.go:800] validating driver "docker" against &{Name:functional-20220329101744-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329101744-2053 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false
volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 10:20:10.305054    3917 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0329 10:20:10.305264    3917 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 10:20:10.489753    3917 info.go:263] docker info: {ID:4GJZ:5WQJ:PTOH:OBGV:UGLB:2QMR:SRUC:WPW4:I7LT:V2VN:S3VH:GWN3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:53 SystemTime:2022-03-29 17:20:10.433765936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerA
ddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=secc
omp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0329 10:20:10.491741    3917 start_flags.go:837] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0329 10:20:10.491762    3917 cni.go:93] Creating CNI manager for ""
	I0329 10:20:10.491772    3917 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 10:20:10.491783    3917 start_flags.go:306] config:
	{Name:functional-20220329101744-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329101744-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false
volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 10:20:10.540598    3917 out.go:176] * Starting control plane node functional-20220329101744-2053 in cluster functional-20220329101744-2053
	I0329 10:20:10.540656    3917 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 10:20:10.566384    3917 out.go:176] * Pulling base image ...
	I0329 10:20:10.566452    3917 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 10:20:10.566514    3917 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 10:20:10.566523    3917 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 10:20:10.566544    3917 cache.go:57] Caching tarball of preloaded images
	I0329 10:20:10.566762    3917 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0329 10:20:10.566782    3917 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0329 10:20:10.567998    3917 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/config.json ...
	I0329 10:20:10.687336    3917 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 10:20:10.687355    3917 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 10:20:10.687363    3917 cache.go:208] Successfully downloaded all kic artifacts
	I0329 10:20:10.687422    3917 start.go:348] acquiring machines lock for functional-20220329101744-2053: {Name:mk70efe01c61c1665e9dd1baf19d51fbdfc798fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 10:20:10.687511    3917 start.go:352] acquired machines lock for "functional-20220329101744-2053" in 72.401µs
	I0329 10:20:10.687532    3917 start.go:94] Skipping create...Using existing machine configuration
	I0329 10:20:10.687540    3917 fix.go:55] fixHost starting: 
	I0329 10:20:10.687802    3917 cli_runner.go:133] Run: docker container inspect functional-20220329101744-2053 --format={{.State.Status}}
	I0329 10:20:10.804808    3917 fix.go:108] recreateIfNeeded on functional-20220329101744-2053: state=Running err=<nil>
	W0329 10:20:10.804828    3917 fix.go:134] unexpected machine state, will restart: <nil>
	I0329 10:20:10.853326    3917 out.go:176] * Updating the running docker "functional-20220329101744-2053" container ...
	I0329 10:20:10.853351    3917 machine.go:88] provisioning docker machine ...
	I0329 10:20:10.853372    3917 ubuntu.go:169] provisioning hostname "functional-20220329101744-2053"
	I0329 10:20:10.853477    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:10.971089    3917 main.go:130] libmachine: Using SSH client type: native
	I0329 10:20:10.971286    3917 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13a3660] 0x13a6740 <nil>  [] 0s} 127.0.0.1 52219 <nil> <nil>}
	I0329 10:20:10.971301    3917 main.go:130] libmachine: About to run SSH command:
	sudo hostname functional-20220329101744-2053 && echo "functional-20220329101744-2053" | sudo tee /etc/hostname
	I0329 10:20:11.101732    3917 main.go:130] libmachine: SSH cmd err, output: <nil>: functional-20220329101744-2053
	
	I0329 10:20:11.101830    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:11.217800    3917 main.go:130] libmachine: Using SSH client type: native
	I0329 10:20:11.217950    3917 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13a3660] 0x13a6740 <nil>  [] 0s} 127.0.0.1 52219 <nil> <nil>}
	I0329 10:20:11.217962    3917 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-20220329101744-2053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20220329101744-2053/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-20220329101744-2053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 10:20:11.338348    3917 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 10:20:11.338362    3917 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.p
em ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube}
	I0329 10:20:11.338381    3917 ubuntu.go:177] setting up certificates
	I0329 10:20:11.338391    3917 provision.go:83] configureAuth start
	I0329 10:20:11.338487    3917 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20220329101744-2053
	I0329 10:20:11.454609    3917 provision.go:138] copyHostCerts
	I0329 10:20:11.454717    3917 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem, removing ...
	I0329 10:20:11.454723    3917 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem
	I0329 10:20:11.454828    3917 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem (1123 bytes)
	I0329 10:20:11.455023    3917 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem, removing ...
	I0329 10:20:11.455031    3917 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem
	I0329 10:20:11.455084    3917 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem (1679 bytes)
	I0329 10:20:11.455220    3917 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem, removing ...
	I0329 10:20:11.455223    3917 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem
	I0329 10:20:11.455278    3917 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem (1078 bytes)
	I0329 10:20:11.455394    3917 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem org=jenkins.functional-20220329101744-2053 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-20220329101744-2053]
	I0329 10:20:11.608674    3917 provision.go:172] copyRemoteCerts
	I0329 10:20:11.608734    3917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 10:20:11.608786    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:11.729207    3917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329101744-2053/id_rsa Username:docker}
	I0329 10:20:11.817638    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 10:20:11.835172    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0329 10:20:11.853791    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0329 10:20:11.870496    3917 provision.go:86] duration metric: configureAuth took 532.097423ms
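
configureAuth regenerates the machine's server certificate, signed by the local minikube CA, with the SANs listed above (192.168.49.2, 127.0.0.1, localhost, minikube, and the profile name). A sketch of that issuance with crypto/x509, assuming PKCS#1 PEM-encoded ca.pem/ca-key.pem in the working directory; not minikube's actual code path:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // mustDecode reads a PEM file and returns the first block's DER bytes.
    func mustDecode(path string) []byte {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		panic("no PEM block in " + path)
    	}
    	return block.Bytes
    }

    func main() {
    	caCert, err := x509.ParseCertificate(mustDecode("ca.pem"))
    	if err != nil {
    		panic(err)
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem"))
    	if err != nil {
    		panic(err)
    	}
    	priv, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-20220329101744-2053"}},
    		DNSNames:     []string{"localhost", "minikube", "functional-20220329101744-2053"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump below
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // server.pem contents
    }
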
	I0329 10:20:11.870505    3917 ubuntu.go:193] setting minikube options for container-runtime
	I0329 10:20:11.870715    3917 config.go:176] Loaded profile config "functional-20220329101744-2053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 10:20:11.870784    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:11.987282    3917 main.go:130] libmachine: Using SSH client type: native
	I0329 10:20:11.987413    3917 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13a3660] 0x13a6740 <nil>  [] 0s} 127.0.0.1 52219 <nil> <nil>}
	I0329 10:20:11.987418    3917 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 10:20:12.110158    3917 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 10:20:12.110167    3917 ubuntu.go:71] root file system type: overlay
	I0329 10:20:12.110322    3917 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 10:20:12.110415    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:12.227356    3917 main.go:130] libmachine: Using SSH client type: native
	I0329 10:20:12.227498    3917 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13a3660] 0x13a6740 <nil>  [] 0s} 127.0.0.1 52219 <nil> <nil>}
	I0329 10:20:12.227558    3917 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 10:20:12.353975    3917 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0329 10:20:12.354079    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:12.470312    3917 main.go:130] libmachine: Using SSH client type: native
	I0329 10:20:12.470442    3917 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13a3660] 0x13a6740 <nil>  [] 0s} 127.0.0.1 52219 <nil> <nil>}
	I0329 10:20:12.470452    3917 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0329 10:20:12.595345    3917 main.go:130] libmachine: SSH cmd err, output: <nil>: 
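
The one-liner above is a compare-then-swap: diff -u exits 0 when docker.service already matches the rendered docker.service.new, so the mv/daemon-reload/restart branch never runs; the empty output here means the unit was unchanged and Docker kept running. The same guard in Go, as a hypothetical local helper assuming root and systemctl on PATH:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // installIfChanged writes newContent to path only when it differs from the
    // current file, then reloads systemd and restarts the unit.
    func installIfChanged(path string, newContent []byte, unit string) error {
    	old, _ := os.ReadFile(path) // a missing file simply counts as "changed"
    	if bytes.Equal(old, newContent) {
    		return nil // nothing to do, like the diff -u short-circuit above
    	}
    	if err := os.WriteFile(path+".new", newContent, 0644); err != nil {
    		return err
    	}
    	if err := os.Rename(path+".new", path); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", unit},
    		{"systemctl", "restart", unit},
    	} {
    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }
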
	I0329 10:20:12.595354    3917 machine.go:91] provisioned docker machine in 1.74200228s
	I0329 10:20:12.595359    3917 start.go:302] post-start starting for "functional-20220329101744-2053" (driver="docker")
	I0329 10:20:12.595362    3917 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 10:20:12.595443    3917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 10:20:12.595500    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:12.710204    3917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329101744-2053/id_rsa Username:docker}
	I0329 10:20:12.797305    3917 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 10:20:12.801221    3917 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 10:20:12.801232    3917 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 10:20:12.801237    3917 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 10:20:12.801239    3917 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 10:20:12.801247    3917 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/addons for local assets ...
	I0329 10:20:12.801351    3917 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files for local assets ...
	I0329 10:20:12.801495    3917 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/20532.pem -> 20532.pem in /etc/ssl/certs
	I0329 10:20:12.801654    3917 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/test/nested/copy/2053/hosts -> hosts in /etc/test/nested/copy/2053
	I0329 10:20:12.801713    3917 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2053
	I0329 10:20:12.808899    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/20532.pem --> /etc/ssl/certs/20532.pem (1708 bytes)
	I0329 10:20:12.825855    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/test/nested/copy/2053/hosts --> /etc/test/nested/copy/2053/hosts (40 bytes)
	I0329 10:20:12.843428    3917 start.go:305] post-start completed in 248.055702ms
	I0329 10:20:12.843514    3917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 10:20:12.843574    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:12.957452    3917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329101744-2053/id_rsa Username:docker}
	I0329 10:20:13.041831    3917 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 10:20:13.046850    3917 fix.go:57] fixHost completed within 2.359314315s
	I0329 10:20:13.046860    3917 start.go:81] releasing machines lock for "functional-20220329101744-2053", held for 2.359347287s
	I0329 10:20:13.046957    3917 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20220329101744-2053
	I0329 10:20:13.166446    3917 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0329 10:20:13.166452    3917 ssh_runner.go:195] Run: systemctl --version
	I0329 10:20:13.166513    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:13.166529    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:13.293600    3917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329101744-2053/id_rsa Username:docker}
	I0329 10:20:13.293604    3917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329101744-2053/id_rsa Username:docker}
	I0329 10:20:13.476441    3917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0329 10:20:13.487484    3917 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 10:20:13.497103    3917 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0329 10:20:13.497160    3917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0329 10:20:13.506595    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0329 10:20:13.518846    3917 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0329 10:20:13.595391    3917 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0329 10:20:13.670375    3917 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 10:20:13.680244    3917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0329 10:20:13.755866    3917 ssh_runner.go:195] Run: sudo systemctl start docker
	I0329 10:20:13.765969    3917 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 10:20:13.803832    3917 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 10:20:13.868638    3917 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0329 10:20:13.868840    3917 cli_runner.go:133] Run: docker exec -t functional-20220329101744-2053 dig +short host.docker.internal
	I0329 10:20:14.045443    3917 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
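
The host IP used for mounts is discovered by running dig inside the container against Docker's built-in host.docker.internal name, exactly as the exec call above shows. The equivalent one-off lookup, assuming the container from this run is still up:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Resolve the host's IP as seen from inside the container.
    	out, err := exec.Command("docker", "exec", "-t",
    		"functional-20220329101744-2053", "dig", "+short", "host.docker.internal").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("host ip:", strings.TrimSpace(string(out))) // 192.168.65.2 in this run
    }
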
	I0329 10:20:14.045541    3917 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0329 10:20:14.049965    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:14.188954    3917 out.go:176]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0329 10:20:14.189110    3917 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 10:20:14.189279    3917 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 10:20:14.221288    3917 docker.go:606] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20220329101744-2053
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.3
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0329 10:20:14.221299    3917 docker.go:537] Images already preloaded, skipping extraction
	I0329 10:20:14.221395    3917 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 10:20:14.250989    3917 docker.go:606] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20220329101744-2053
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.3
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0329 10:20:14.251003    3917 cache_images.go:84] Images are preloaded, skipping loading
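
"Images are preloaded" is decided by listing repository:tag pairs with docker images --format and checking the expected set for v1.23.5 against it; only a miss would trigger extracting the preload tarball. A sketch of that set check, with the expected list copied from the stdout block above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	want := []string{
    		"k8s.gcr.io/kube-apiserver:v1.23.5",
    		"k8s.gcr.io/kube-proxy:v1.23.5",
    		"k8s.gcr.io/kube-controller-manager:v1.23.5",
    		"k8s.gcr.io/kube-scheduler:v1.23.5",
    		"k8s.gcr.io/etcd:3.5.1-0",
    		"k8s.gcr.io/coredns/coredns:v1.8.6",
    		"k8s.gcr.io/pause:3.6",
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    	}
    	for _, img := range want {
    		if !have[img] {
    			fmt.Println("missing, would trigger extraction:", img)
    		}
    	}
    }
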
	I0329 10:20:14.251104    3917 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0329 10:20:14.330755    3917 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0329 10:20:14.330771    3917 cni.go:93] Creating CNI manager for ""
	I0329 10:20:14.330777    3917 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 10:20:14.330784    3917 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0329 10:20:14.330797    3917 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20220329101744-2053 NodeName:functional-20220329101744-2053 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0329 10:20:14.330906    3917 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "functional-20220329101744-2053"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0329 10:20:14.330998    3917 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=functional-20220329101744-2053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:functional-20220329101744-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
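
The kubeadm config and kubelet unit above are generated by substituting per-profile values (node IP, port, the user's --extra-config pairs) into templates. A much-reduced illustration with text/template that reproduces just the apiServer stanza; the field names here are invented for the example, not minikube's:

    package main

    import (
    	"os"
    	"text/template"
    )

    const apiServerTmpl = `apiServer:
      certSANs: ["127.0.0.1", "localhost", "{{.NodeIP}}"]
      extraArgs:
    {{- range $k, $v := .ExtraArgs}}
        {{$k}}: "{{$v}}"
    {{- end}}
    `

    func main() {
    	t := template.Must(template.New("apiserver").Parse(apiServerTmpl))
    	data := struct {
    		NodeIP    string
    		ExtraArgs map[string]string
    	}{
    		NodeIP:    "192.168.49.2",
    		ExtraArgs: map[string]string{"enable-admission-plugins": "NamespaceAutoProvision"},
    	}
    	if err := t.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }
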
	I0329 10:20:14.331060    3917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0329 10:20:14.338941    3917 binaries.go:44] Found k8s binaries, skipping transfer
	I0329 10:20:14.339011    3917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0329 10:20:14.346340    3917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (356 bytes)
	I0329 10:20:14.359122    3917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0329 10:20:14.372031    3917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1902 bytes)
	I0329 10:20:14.385069    3917 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0329 10:20:14.388886    3917 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053 for IP: 192.168.49.2
	I0329 10:20:14.389018    3917 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key
	I0329 10:20:14.389068    3917 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key
	I0329 10:20:14.389151    3917 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.key
	I0329 10:20:14.389219    3917 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/apiserver.key.dd3b5fb2
	I0329 10:20:14.389272    3917 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/proxy-client.key
	I0329 10:20:14.389475    3917 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/2053.pem (1338 bytes)
	W0329 10:20:14.389518    3917 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/2053_empty.pem, impossibly tiny 0 bytes
	I0329 10:20:14.389534    3917 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem (1675 bytes)
	I0329 10:20:14.389575    3917 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem (1078 bytes)
	I0329 10:20:14.389611    3917 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem (1123 bytes)
	I0329 10:20:14.389640    3917 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem (1679 bytes)
	I0329 10:20:14.389709    3917 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/20532.pem (1708 bytes)
	I0329 10:20:14.390284    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0329 10:20:14.407520    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0329 10:20:14.424808    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0329 10:20:14.442958    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0329 10:20:14.460282    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0329 10:20:14.481065    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0329 10:20:14.500298    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0329 10:20:14.519050    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0329 10:20:14.536906    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0329 10:20:14.553966    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/2053.pem --> /usr/share/ca-certificates/2053.pem (1338 bytes)
	I0329 10:20:14.571410    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/20532.pem --> /usr/share/ca-certificates/20532.pem (1708 bytes)
	I0329 10:20:14.589614    3917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0329 10:20:14.603333    3917 ssh_runner.go:195] Run: openssl version
	I0329 10:20:14.609100    3917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20532.pem && ln -fs /usr/share/ca-certificates/20532.pem /etc/ssl/certs/20532.pem"
	I0329 10:20:14.616901    3917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20532.pem
	I0329 10:20:14.621145    3917 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 29 17:17 /usr/share/ca-certificates/20532.pem
	I0329 10:20:14.621196    3917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20532.pem
	I0329 10:20:14.627067    3917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20532.pem /etc/ssl/certs/3ec20f2e.0"
	I0329 10:20:14.634666    3917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0329 10:20:14.642793    3917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0329 10:20:14.647552    3917 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 29 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0329 10:20:14.647640    3917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0329 10:20:14.653568    3917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0329 10:20:14.661017    3917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2053.pem && ln -fs /usr/share/ca-certificates/2053.pem /etc/ssl/certs/2053.pem"
	I0329 10:20:14.670246    3917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2053.pem
	I0329 10:20:14.674381    3917 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 29 17:17 /usr/share/ca-certificates/2053.pem
	I0329 10:20:14.674430    3917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2053.pem
	I0329 10:20:14.680092    3917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2053.pem /etc/ssl/certs/51391683.0"
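
The openssl/ln sequence above installs each CA into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 for minikubeCA.pem, for example) so TLS clients on the node can discover it. A sketch of one such installation, assuming openssl on PATH and permission to write /etc/ssl/certs:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCA links a PEM cert into /etc/ssl/certs under its subject hash.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    	_ = os.Remove(link) // replace a stale link if present
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		panic(err)
    	}
    }
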
	I0329 10:20:14.688214    3917 kubeadm.go:391] StartCluster: {Name:functional-20220329101744-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329101744-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 10:20:14.688358    3917 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0329 10:20:14.716654    3917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0329 10:20:14.724460    3917 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0329 10:20:14.724472    3917 kubeadm.go:601] restartCluster start
	I0329 10:20:14.724523    3917 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0329 10:20:14.731580    3917 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0329 10:20:14.731660    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:14.920891    3917 kubeconfig.go:92] found "functional-20220329101744-2053" server: "https://127.0.0.1:52218"
	I0329 10:20:14.921811    3917 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0329 10:20:14.929694    3917 kubeadm.go:569] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-03-29 17:18:36.359627992 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-03-29 17:20:14.401322431 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0329 10:20:14.929704    3917 kubeadm.go:1067] stopping kube-system containers ...
	I0329 10:20:14.929774    3917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0329 10:20:14.959374    3917 docker.go:438] Stopping containers: [7dbb2ebc006a 162cb93e1f50 de8290e9fe44 02b83e91f9c6 2bcc7f96a251 2309664b2dca 2c02e68a12d6 6d4f9784ec46 e94d2bb30a5c d60fd8047ed0 00593e4ce0a9 b4206332236c 11b46fa961b3 11c80f833d8d 45cc2e7c49eb]
	I0329 10:20:14.959465    3917 ssh_runner.go:195] Run: docker stop 7dbb2ebc006a 162cb93e1f50 de8290e9fe44 02b83e91f9c6 2bcc7f96a251 2309664b2dca 2c02e68a12d6 6d4f9784ec46 e94d2bb30a5c d60fd8047ed0 00593e4ce0a9 b4206332236c 11b46fa961b3 11c80f833d8d 45cc2e7c49eb
	I0329 10:20:20.175191    3917 ssh_runner.go:235] Completed: docker stop 7dbb2ebc006a 162cb93e1f50 de8290e9fe44 02b83e91f9c6 2bcc7f96a251 2309664b2dca 2c02e68a12d6 6d4f9784ec46 e94d2bb30a5c d60fd8047ed0 00593e4ce0a9 b4206332236c 11b46fa961b3 11c80f833d8d 45cc2e7c49eb: (5.215695554s)
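
Before reconfiguring, every container the kubelet created for kube-system (named k8s_<container>_<pod>_<namespace>_...) is collected by a name filter and stopped in a single docker stop, which is why the one call above took about five seconds. A sketch of that collect-and-stop, assuming the docker CLI:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_.*_(kube-system)_",
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return
    	}
    	// One docker stop with all IDs, mirroring the single call in the log.
    	if out, err := exec.Command("docker", append([]string{"stop"}, ids...)...).CombinedOutput(); err != nil {
    		fmt.Printf("docker stop: %v\n%s", err, out)
    	}
    }
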
	I0329 10:20:20.175271    3917 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0329 10:20:20.213817    3917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0329 10:20:20.224229    3917 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 29 17:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Mar 29 17:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Mar 29 17:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Mar 29 17:18 /etc/kubernetes/scheduler.conf
	
	I0329 10:20:20.224296    3917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0329 10:20:20.234715    3917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0329 10:20:20.243873    3917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0329 10:20:20.252674    3917 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0329 10:20:20.252738    3917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0329 10:20:20.278277    3917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0329 10:20:20.288617    3917 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0329 10:20:20.288677    3917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0329 10:20:20.300580    3917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0329 10:20:20.308273    3917 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0329 10:20:20.308284    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0329 10:20:20.386047    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0329 10:20:21.499092    3917 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.11301405s)
	I0329 10:20:21.499107    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0329 10:20:21.644328    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0329 10:20:21.712599    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0329 10:20:21.813832    3917 api_server.go:51] waiting for apiserver process to appear ...
	I0329 10:20:21.813915    3917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0329 10:20:21.824838    3917 api_server.go:71] duration metric: took 11.024686ms to wait for apiserver process to appear ...
	I0329 10:20:21.824854    3917 api_server.go:87] waiting for apiserver healthz status ...
	I0329 10:20:21.824865    3917 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52218/healthz ...
	I0329 10:20:22.003709    3917 api_server.go:266] https://127.0.0.1:52218/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0329 10:20:22.003720    3917 api_server.go:102] status: https://127.0.0.1:52218/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0329 10:20:22.504507    3917 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52218/healthz ...
	I0329 10:20:22.511850    3917 api_server.go:266] https://127.0.0.1:52218/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0329 10:20:22.511859    3917 api_server.go:102] status: https://127.0.0.1:52218/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0329 10:20:23.004168    3917 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52218/healthz ...
	I0329 10:20:23.009465    3917 api_server.go:266] https://127.0.0.1:52218/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0329 10:20:23.009482    3917 api_server.go:102] status: https://127.0.0.1:52218/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0329 10:20:23.504648    3917 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52218/healthz ...
	I0329 10:20:23.511215    3917 api_server.go:266] https://127.0.0.1:52218/healthz returned 200:
	ok
	I0329 10:20:23.518107    3917 api_server.go:140] control plane version: v1.23.5
	I0329 10:20:23.518114    3917 api_server.go:130] duration metric: took 1.693258713s to wait for apiserver health ...
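
The 403 then 500 then 200 progression above is the normal shape of an apiserver restart: anonymous /healthz is forbidden until the RBAC bootstrap roles land, then individual poststarthooks flip from failed to ok. A polling sketch against the forwarded port from this run; InsecureSkipVerify stands in for loading the cluster CA and is an assumption of this example:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// Anonymous probe against the apiserver's self-signed chain.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://127.0.0.1:52218/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthy:", string(body)) // "ok"
    				return
    			}
    			fmt.Printf("status %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("apiserver never became healthy")
    }
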
	I0329 10:20:23.518119    3917 cni.go:93] Creating CNI manager for ""
	I0329 10:20:23.518122    3917 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 10:20:23.518127    3917 system_pods.go:43] waiting for kube-system pods to appear ...
	I0329 10:20:23.524875    3917 system_pods.go:59] 7 kube-system pods found
	I0329 10:20:23.524884    3917 system_pods.go:61] "coredns-64897985d-cgqv9" [03cb64c9-c89a-4a7b-a3b0-6f7fd6a0f97d] Running
	I0329 10:20:23.524887    3917 system_pods.go:61] "etcd-functional-20220329101744-2053" [76e54b07-2a89-4ecb-a69b-3d563297d112] Running
	I0329 10:20:23.524889    3917 system_pods.go:61] "kube-apiserver-functional-20220329101744-2053" [308b23e0-a233-415b-9aa5-0961e612689b] Running
	I0329 10:20:23.524891    3917 system_pods.go:61] "kube-controller-manager-functional-20220329101744-2053" [07302e0e-8132-492a-a66f-2f1888edee04] Running
	I0329 10:20:23.524893    3917 system_pods.go:61] "kube-proxy-gwwvl" [f9c4335e-3686-4617-91d5-454f38d2a099] Running
	I0329 10:20:23.524895    3917 system_pods.go:61] "kube-scheduler-functional-20220329101744-2053" [bb5ed99c-3350-49e8-b4a3-d53f4bfb7496] Running
	I0329 10:20:23.524896    3917 system_pods.go:61] "storage-provisioner" [e52ce334-96a3-4dfb-9e1d-9977c6a2572d] Running
	I0329 10:20:23.524898    3917 system_pods.go:74] duration metric: took 6.769126ms to wait for pod list to return data ...
	I0329 10:20:23.524903    3917 node_conditions.go:102] verifying NodePressure condition ...
	I0329 10:20:23.527767    3917 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0329 10:20:23.527778    3917 node_conditions.go:123] node cpu capacity is 6
	I0329 10:20:23.527785    3917 node_conditions.go:105] duration metric: took 2.88008ms to run NodePressure ...
	I0329 10:20:23.527792    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0329 10:20:23.701960    3917 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0329 10:20:23.706132    3917 kubeadm.go:752] kubelet initialised
	I0329 10:20:23.706138    3917 kubeadm.go:753] duration metric: took 4.169507ms waiting for restarted kubelet to initialise ...
	I0329 10:20:23.706144    3917 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 10:20:23.711622    3917 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-cgqv9" in "kube-system" namespace to be "Ready" ...
	I0329 10:20:23.716941    3917 pod_ready.go:97] node "functional-20220329101744-2053" hosting pod "coredns-64897985d-cgqv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.716949    3917 pod_ready.go:81] duration metric: took 5.309906ms waiting for pod "coredns-64897985d-cgqv9" in "kube-system" namespace to be "Ready" ...
	E0329 10:20:23.716953    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "coredns-64897985d-cgqv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.716961    3917 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	I0329 10:20:23.721434    3917 pod_ready.go:97] node "functional-20220329101744-2053" hosting pod "etcd-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.721439    3917 pod_ready.go:81] duration metric: took 4.471971ms waiting for pod "etcd-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	E0329 10:20:23.721446    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "etcd-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.721455    3917 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	I0329 10:20:23.725925    3917 pod_ready.go:97] node "functional-20220329101744-2053" hosting pod "kube-apiserver-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.725932    3917 pod_ready.go:81] duration metric: took 4.473517ms waiting for pod "kube-apiserver-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	E0329 10:20:23.725937    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "kube-apiserver-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.725944    3917 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	I0329 10:20:23.926377    3917 pod_ready.go:97] node "functional-20220329101744-2053" hosting pod "kube-controller-manager-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.926383    3917 pod_ready.go:81] duration metric: took 200.436041ms waiting for pod "kube-controller-manager-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	E0329 10:20:23.926389    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "kube-controller-manager-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.926398    3917 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gwwvl" in "kube-system" namespace to be "Ready" ...
	I0329 10:20:24.326778    3917 pod_ready.go:97] node "functional-20220329101744-2053" hosting pod "kube-proxy-gwwvl" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:24.326795    3917 pod_ready.go:81] duration metric: took 400.391077ms waiting for pod "kube-proxy-gwwvl" in "kube-system" namespace to be "Ready" ...
	E0329 10:20:24.326799    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "kube-proxy-gwwvl" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:24.326809    3917 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	I0329 10:20:24.726758    3917 pod_ready.go:97] node "functional-20220329101744-2053" hosting pod "kube-scheduler-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:24.726770    3917 pod_ready.go:81] duration metric: took 399.955353ms waiting for pod "kube-scheduler-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	E0329 10:20:24.726776    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "kube-scheduler-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:24.726786    3917 pod_ready.go:38] duration metric: took 1.020638246s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
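A rough manual equivalent of the readiness wait above, as a sketch (it assumes kubectl is pointed at this profile's context, which minikube names after the profile): the node itself must report Ready before any per-pod Ready check can pass, which is why every pod above is skipped.

    # Check node and system-pod readiness by hand
    kubectl --context functional-20220329101744-2053 get nodes
    kubectl --context functional-20220329101744-2053 -n kube-system get pods -o wide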
	I0329 10:20:24.726797    3917 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0329 10:20:24.734424    3917 ops.go:34] apiserver oom_adj: -16
	I0329 10:20:24.734429    3917 kubeadm.go:605] restartCluster took 10.009969939s
	I0329 10:20:24.734434    3917 kubeadm.go:393] StartCluster complete in 10.046250554s
	I0329 10:20:24.734446    3917 settings.go:142] acquiring lock: {Name:mk5b01a4191281d3f224b52386a90714bd22cc72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 10:20:24.734536    3917 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 10:20:24.734959    3917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig: {Name:mk7bef67bea8eb326a483bde80a52ac63c137849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 10:20:24.738066    3917 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20220329101744-2053" rescaled to 1
	I0329 10:20:24.738096    3917 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 10:20:24.738102    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0329 10:20:24.738126    3917 addons.go:415] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0329 10:20:24.817272    3917 out.go:176] * Verifying Kubernetes components...
	I0329 10:20:24.738266    3917 config.go:176] Loaded profile config "functional-20220329101744-2053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 10:20:24.817320    3917 addons.go:65] Setting storage-provisioner=true in profile "functional-20220329101744-2053"
	I0329 10:20:24.817322    3917 addons.go:65] Setting default-storageclass=true in profile "functional-20220329101744-2053"
	I0329 10:20:24.795231    3917 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0329 10:20:24.817343    3917 addons.go:153] Setting addon storage-provisioner=true in "functional-20220329101744-2053"
	W0329 10:20:24.843464    3917 addons.go:165] addon storage-provisioner should already be in state true
	I0329 10:20:24.817354    3917 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20220329101744-2053"
	I0329 10:20:24.843491    3917 host.go:66] Checking if "functional-20220329101744-2053" exists ...
	I0329 10:20:24.817357    3917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 10:20:24.843889    3917 cli_runner.go:133] Run: docker container inspect functional-20220329101744-2053 --format={{.State.Status}}
	I0329 10:20:24.844085    3917 cli_runner.go:133] Run: docker container inspect functional-20220329101744-2053 --format={{.State.Status}}
	I0329 10:20:24.855943    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:25.048563    3917 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 10:20:25.048734    3917 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 10:20:25.048739    3917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0329 10:20:25.048844    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:25.051116    3917 addons.go:153] Setting addon default-storageclass=true in "functional-20220329101744-2053"
	W0329 10:20:25.051130    3917 addons.go:165] addon default-storageclass should already be in state true
	I0329 10:20:25.051152    3917 host.go:66] Checking if "functional-20220329101744-2053" exists ...
	I0329 10:20:25.051634    3917 cli_runner.go:133] Run: docker container inspect functional-20220329101744-2053 --format={{.State.Status}}
	I0329 10:20:25.054920    3917 node_ready.go:35] waiting up to 6m0s for node "functional-20220329101744-2053" to be "Ready" ...
	I0329 10:20:25.194758    3917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329101744-2053/id_rsa Username:docker}
	I0329 10:20:25.194838    3917 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0329 10:20:25.194845    3917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0329 10:20:25.194931    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:25.300543    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 10:20:25.329550    3917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329101744-2053/id_rsa Username:docker}
	I0329 10:20:25.436200    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0329 10:20:26.696351    3917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.395791826s)
	W0329 10:20:26.696368    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:26.696378    3917 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:26.698566    3917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.262353816s)
	W0329 10:20:26.698580    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:26.698589    3917 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:26.976016    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0329 10:20:27.017384    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:27.017396    3917 retry.go:31] will retry after 436.71002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:27.059148    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0329 10:20:27.101134    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:27.101144    3917 retry.go:31] will retry after 351.64282ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:27.453086    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0329 10:20:27.454423    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0329 10:20:27.498290    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:27.498308    3917 retry.go:31] will retry after 520.108592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0329 10:20:27.498327    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:27.498335    3917 retry.go:31] will retry after 667.587979ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.027104    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0329 10:20:28.067211    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.067222    3917 retry.go:31] will retry after 477.256235ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.167158    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0329 10:20:28.206742    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.206753    3917 retry.go:31] will retry after 553.938121ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.553099    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0329 10:20:28.596404    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.596417    3917 retry.go:31] will retry after 755.539547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.765889    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0329 10:20:28.807216    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.807226    3917 retry.go:31] will retry after 1.013654073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:29.362175    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0329 10:20:29.403145    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:29.403157    3917 retry.go:31] will retry after 1.927317724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:29.823317    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0329 10:20:29.870629    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:29.870640    3917 retry.go:31] will retry after 2.493863364s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:31.340690    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0329 10:20:31.382676    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:31.382686    3917 retry.go:31] will retry after 2.033977981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:32.367274    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0329 10:20:32.406450    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:32.406459    3917 retry.go:31] will retry after 2.507808949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:33.424295    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0329 10:20:33.465952    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:33.465963    3917 retry.go:31] will retry after 3.494322709s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:34.922359    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0329 10:20:34.965164    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:34.965173    3917 retry.go:31] will retry after 4.138597834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:36.124259    3917 node_ready.go:53] error getting node "functional-20220329101744-2053": Get "https://127.0.0.1:52218/api/v1/nodes/functional-20220329101744-2053": EOF
	I0329 10:20:36.124270    3917 node_ready.go:38] duration metric: took 11.069351875s waiting for node "functional-20220329101744-2053" to be "Ready" ...
	I0329 10:20:36.151340    3917 out.go:176] 
	W0329 10:20:36.151539    3917 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-20220329101744-2053": Get "https://127.0.0.1:52218/api/v1/nodes/functional-20220329101744-2053": EOF
	W0329 10:20:36.151559    3917 out.go:241] * 
	W0329 10:20:36.152589    3917 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
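Two different endpoints fail here for the same reason: the node-ready probe hits https://127.0.0.1:52218, the host port Docker mapped to the apiserver's 8441 inside the node, while the addon applies run kubectl inside the node against localhost:8441. A sketch for probing both ends (port values taken from this run; curl -k skips the self-signed certificate check):

    # Resolve the host-side mapping for the apiserver port
    docker port functional-20220329101744-2053 8441/tcp
    # Probe from the host, as the node_ready check does
    curl -k https://127.0.0.1:52218/healthz
    # Probe from inside the node, as the addon applies do
    minikube -p functional-20220329101744-2053 ssh -- curl -sk https://localhost:8441/healthz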
	* ==> Docker <==
	* -- Logs begin at Tue 2022-03-29 17:18:07 UTC, end at Tue 2022-03-29 17:20:37 UTC. --
	Mar 29 17:18:33 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:18:33.672703286Z" level=info msg="Daemon has completed initialization"
	Mar 29 17:18:33 functional-20220329101744-2053 systemd[1]: Started Docker Application Container Engine.
	Mar 29 17:18:33 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:18:33.699444606Z" level=info msg="API listen on [::]:2376"
	Mar 29 17:18:33 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:18:33.701575716Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 29 17:19:17 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:19:17.669672925Z" level=info msg="ignoring event" container=74446a4a8dd647f83997f9a63c0b0f056b723088b263b25c5107951c2fa46550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:19:17 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:19:17.719546718Z" level=info msg="ignoring event" container=b1aa96243016a906f822a673cbf161738b75c4dbfcf460965614f6339b96617e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:19:38 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:19:38.203019550Z" level=info msg="ignoring event" container=162cb93e1f50ddf67d12dd5968c3916bb1d69057a108411f049db0f61f9d112b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.230935582Z" level=info msg="ignoring event" container=7dbb2ebc006ab7bfb8321a4065ac6928974cf531a262fe4ad6f2b674b1733011 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.305509858Z" level=info msg="ignoring event" container=2c02e68a12d6d161370fd0a0ef4e8a6c9db0270fd3ebe58b2606cfd8f6d97568 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.315909561Z" level=info msg="ignoring event" container=2bcc7f96a25123a4428a6b00fdb2cc4f8b9edb18f8283d06e1f26ba0a8584c1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.316250778Z" level=info msg="ignoring event" container=11c80f833d8dde40a4a1cfd2bc12f6c123a6b761dc9b54697bdc0d8bc62dfe05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.318540873Z" level=info msg="ignoring event" container=2309664b2dca702eb4a61ee5cc3a10938c265903849b7ab4b898dfd66dfe4793 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.326071255Z" level=info msg="ignoring event" container=11b46fa961b383c50e44a98d457917834f2888d4f2afa66a05c8a8be0a270510 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.326119812Z" level=info msg="ignoring event" container=b4206332236c1d6a93ab1667527093fa9b6fd33e53655b558e7df4c22d6151b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.326824157Z" level=info msg="ignoring event" container=45cc2e7c49ebc3ab5b0410a0712d1406a48a343019ea0fde64a2b5f36de3f105 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.330064372Z" level=info msg="ignoring event" container=de8290e9fe44b9444a7685361f59c2f04408bdd82f6595bca170f08ac06d2355 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.334320487Z" level=info msg="ignoring event" container=d60fd8047ed09c80c628c297df3621a110f11a8f65b12c51e8471096e8a23715 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.403148738Z" level=info msg="ignoring event" container=e94d2bb30a5c39caf6ac8526e3658dfd41e986a21cac5901beaf07739134cd45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:16 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:16.400075314Z" level=info msg="ignoring event" container=00593e4ce0a97d0563d0eb8e475ef1a97f2e34fbd398585a82ac26d7d4cfc406 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:17 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:17.680664680Z" level=info msg="ignoring event" container=6d4f9784ec462af8f6e1dc7edfeafb806b60e78539983fbfa8169fa4eeb793a4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:20 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:20.139898151Z" level=info msg="ignoring event" container=02b83e91f9c63300d6ab9e272bdeda877068a8f89f3e266ced3e0cc0691d70ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:24 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:24.287143994Z" level=info msg="ignoring event" container=f4f608ccfb3cf51c424690f2e06f5b94e0979d41d2b246729a9d04d919f9b091 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:25 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:25.693241520Z" level=info msg="ignoring event" container=3982a341516303c3a5a471a6c61992a7237e9c099e95f5109fb0b2c4ccfa060b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:25 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:25.727538372Z" level=info msg="ignoring event" container=a72b7badb1cf607845d046f1e34fd04ce7bfcfd561c8e4a78a4affb2cc12ca3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:25 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:25.780720316Z" level=info msg="ignoring event" container=dae2b195616e4862e4f231515bbba86c37a032df065c2a94d7791ea720fed141 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	dae2b195616e4       3fc1d62d65872       13 seconds ago       Exited              kube-apiserver            1                   b880bdc9e325a
	6df75c7e3990f       a4ca41631cc7a       13 seconds ago       Running             coredns                   1                   629dc99b4434f
	9c35aac5a43a0       6e38f40d628db       14 seconds ago       Running             storage-provisioner       2                   7a7e41db2ffc7
	e1072fa2a0368       884d49d6d8c9f       20 seconds ago       Running             kube-scheduler            1                   0a36befd70d4a
	f36428c4dac6e       25f8c7f3da61c       21 seconds ago       Running             etcd                      1                   4c7cbfe1c073d
	ebc4b70c301ba       b0c9e5e4dbb14       21 seconds ago       Running             kube-controller-manager   1                   50e86ad5d2050
	f6729cadb6286       3c53fa8541f95       22 seconds ago       Running             kube-proxy                1                   a87188636f85f
	7dbb2ebc006ab       6e38f40d628db       About a minute ago   Exited              storage-provisioner       1                   de8290e9fe44b
	02b83e91f9c63       a4ca41631cc7a       About a minute ago   Exited              coredns                   0                   2bcc7f96a2512
	2309664b2dca7       3c53fa8541f95       About a minute ago   Exited              kube-proxy                0                   2c02e68a12d6d
	6d4f9784ec462       884d49d6d8c9f       About a minute ago   Exited              kube-scheduler            0                   b4206332236c1
	e94d2bb30a5c3       b0c9e5e4dbb14       About a minute ago   Exited              kube-controller-manager   0                   11b46fa961b38
	d60fd8047ed09       25f8c7f3da61c       About a minute ago   Exited              etcd                      0                   45cc2e7c49ebc
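kube-apiserver is the only restarted control-plane container left in the Exited state (attempt 1), which matches every connection-refused error above. Its output appears in the kube-apiserver section below; to pull it by hand from the node's inner Docker daemon, a sketch (the truncated IDs in the table work as prefixes):

    minikube -p functional-20220329101744-2053 ssh -- docker logs dae2b195616e4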
	
	* 
	* ==> coredns [02b83e91f9c6] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [6df75c7e3990] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	W0329 17:20:25.695667       1 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	W0329 17:20:25.695690       1 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	W0329 17:20:25.695894       1 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	E0329 17:20:26.549780       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:26.836503       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:27.046387       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:28.402204       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:28.593214       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:29.128631       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:33.015602       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:33.251780       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:34.397547       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
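The restarted CoreDNS cannot list or watch anything because the in-cluster service VIP 10.96.0.1:443 forwards to the same unreachable apiserver. Once an apiserver instance is healthy again, the Endpoints behind the default kubernetes Service should repopulate; a quick check, as a sketch:

    kubectl --context functional-20220329101744-2053 -n default get endpoints kubernetes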
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.034645] bpfilter: read fail 0
	[  +0.028517] bpfilter: read fail 0
	[  +0.028661] bpfilter: read fail 0
	[  +0.028040] bpfilter: write fail -32
	[  +0.024301] bpfilter: write fail -32
	[  +0.040315] bpfilter: write fail -32
	[  +0.026203] bpfilter: write fail -32
	[  +0.037462] bpfilter: read fail 0
	[  +0.035114] bpfilter: write fail -32
	[  +0.029969] bpfilter: write fail -32
	[  +0.030434] bpfilter: read fail 0
	[  +0.032641] bpfilter: write fail -32
	[  +0.031111] bpfilter: write fail -32
	[  +0.026072] bpfilter: write fail -32
	[  +0.028308] bpfilter: read fail 0
	[  +0.026371] bpfilter: write fail -32
	[  +0.028417] bpfilter: write fail -32
	[  +0.027491] bpfilter: write fail -32
	[  +0.033520] bpfilter: write fail -32
	[  +0.038616] bpfilter: write fail -32
	[  +0.039334] bpfilter: write fail -32
	[  +0.031370] bpfilter: read fail 0
	[  +0.028314] bpfilter: read fail 0
	[  +0.038695] bpfilter: read fail 0
	[  +0.031261] bpfilter: read fail 0
	
	* 
	* ==> etcd [d60fd8047ed0] <==
	* {"level":"info","ts":"2022-03-29T17:18:47.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-03-29T17:18:47.704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-03-29T17:18:47.704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-03-29T17:18:47.704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:18:47.704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-03-29T17:18:47.704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:18:47.704Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220329101744-2053 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-29T17:18:47.763Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-03-29T17:18:47.763Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-03-29T17:20:15.207Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-03-29T17:20:15.207Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220329101744-2053","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/03/29 17:20:15 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/03/29 17:20:15 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-03-29T17:20:15.217Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-03-29T17:20:15.230Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-29T17:20:15.231Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-29T17:20:15.231Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220329101744-2053","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [f36428c4dac6] <==
	* {"level":"info","ts":"2022-03-29T17:20:18.427Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-03-29T17:20:18.427Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-03-29T17:20:18.427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-03-29T17:20:18.428Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-03-29T17:20:18.428Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:20:18.428Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:20:18.430Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-03-29T17:20:18.430Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-03-29T17:20:18.430Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-03-29T17:20:18.430Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-29T17:20:18.430Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-29T17:20:20.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2022-03-29T17:20:20.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2022-03-29T17:20:20.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:20:20.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2022-03-29T17:20:20.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-03-29T17:20:20.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2022-03-29T17:20:20.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-03-29T17:20:20.135Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220329101744-2053 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-29T17:20:20.135Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-29T17:20:20.135Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-29T17:20:20.135Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:20:20.135Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:20:20.136Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-03-29T17:20:20.136Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  17:20:38 up 9 min,  0 users,  load average: 1.81, 1.79, 1.05
	Linux functional-20220329101744-2053 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [dae2b195616e] <==
	* I0329 17:20:25.690120       1 server.go:565] external host was not specified, using 192.168.49.2
	I0329 17:20:25.690678       1 server.go:172] Version: v1.23.5
	E0329 17:20:25.691027       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
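The restarted apiserver could not bind 0.0.0.0:8441, which usually means an earlier apiserver process (or some other listener) still holds the port inside the node. A sketch for confirming that, using standard iproute2/procps tools over minikube ssh:

    minikube -p functional-20220329101744-2053 ssh -- sudo ss -ltnp | grep 8441
    minikube -p functional-20220329101744-2053 ssh -- sudo pgrep -a kube-apiserver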
	
	* 
	* ==> kube-controller-manager [e94d2bb30a5c] <==
	* I0329 17:19:04.996654       1 shared_informer.go:247] Caches are synced for PV protection 
	I0329 17:19:04.996663       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0329 17:19:04.997822       1 shared_informer.go:247] Caches are synced for HPA 
	I0329 17:19:04.998545       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0329 17:19:05.003289       1 shared_informer.go:247] Caches are synced for node 
	I0329 17:19:05.003322       1 range_allocator.go:173] Starting range CIDR allocator
	I0329 17:19:05.003325       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0329 17:19:05.003330       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0329 17:19:05.007128       1 range_allocator.go:374] Set node functional-20220329101744-2053 PodCIDR to [10.244.0.0/24]
	I0329 17:19:05.106894       1 shared_informer.go:247] Caches are synced for attach detach 
	I0329 17:19:05.117545       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0329 17:19:05.151020       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0329 17:19:05.196864       1 shared_informer.go:247] Caches are synced for endpoint 
	I0329 17:19:05.203787       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 17:19:05.205022       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 17:19:05.402744       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0329 17:19:05.618941       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 17:19:05.681966       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 17:19:05.682059       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0329 17:19:05.702153       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gwwvl"
	I0329 17:19:05.899693       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-q6b25"
	I0329 17:19:05.904115       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-cgqv9"
	I0329 17:19:05.950596       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0329 17:19:05.953936       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-q6b25"
	I0329 17:19:09.950763       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-controller-manager [ebc4b70c301b] <==
	* I0329 17:20:23.826669       1 controllermanager.go:605] Started "ephemeral-volume"
	I0329 17:20:23.826722       1 controller.go:170] Starting ephemeral volume controller
	I0329 17:20:23.826728       1 shared_informer.go:240] Waiting for caches to sync for ephemeral
	I0329 17:20:23.828056       1 controllermanager.go:605] Started "persistentvolume-binder"
	I0329 17:20:23.828213       1 pv_controller_base.go:310] Starting persistent volume controller
	I0329 17:20:23.828239       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
	I0329 17:20:23.829445       1 controllermanager.go:605] Started "clusterrole-aggregation"
	I0329 17:20:23.829456       1 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
	I0329 17:20:23.829562       1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator
	I0329 17:20:23.830740       1 controllermanager.go:605] Started "cronjob"
	I0329 17:20:23.830852       1 cronjob_controllerv2.go:132] "Starting cronjob controller v2"
	I0329 17:20:23.830860       1 shared_informer.go:240] Waiting for caches to sync for cronjob
	I0329 17:20:23.832570       1 node_lifecycle_controller.go:377] Sending events to api server.
	I0329 17:20:23.832701       1 taint_manager.go:163] "Sending events to api server"
	I0329 17:20:23.832750       1 node_lifecycle_controller.go:505] Controller will reconcile labels.
	I0329 17:20:23.832844       1 controllermanager.go:605] Started "nodelifecycle"
	I0329 17:20:23.832870       1 node_lifecycle_controller.go:539] Starting node controller
	I0329 17:20:23.832876       1 shared_informer.go:240] Waiting for caches to sync for taint
	I0329 17:20:23.834102       1 node_ipam_controller.go:91] Sending events to api server.
	I0329 17:20:23.885715       1 shared_informer.go:247] Caches are synced for tokens 
	W0329 17:20:33.835472       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W0329 17:20:34.337065       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W0329 17:20:35.337817       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	W0329 17:20:37.338904       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	E0329 17:20:37.338967       1 cidr_allocator.go:137] Failed to list all nodes: Get "https://192.168.49.2:8441/api/v1/nodes": failed to get token for kube-system/node-controller: timed out waiting for the condition
	
	* 
	* ==> kube-proxy [2309664b2dca] <==
	* I0329 17:19:06.260520       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0329 17:19:06.260597       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0329 17:19:06.260619       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0329 17:19:07.807364       1 server_others.go:206] "Using iptables Proxier"
	I0329 17:19:07.807419       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0329 17:19:07.807430       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0329 17:19:07.807440       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0329 17:19:07.807787       1 server.go:656] "Version info" version="v1.23.5"
	I0329 17:19:07.808221       1 config.go:317] "Starting service config controller"
	I0329 17:19:07.808253       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0329 17:19:07.808439       1 config.go:226] "Starting endpoint slice config controller"
	I0329 17:19:07.808517       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0329 17:19:07.908960       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0329 17:19:07.909011       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [f6729cadb628] <==
	* E0329 17:20:17.459691       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329101744-2053": dial tcp 192.168.49.2:8441: connect: connection refused
	E0329 17:20:18.456065       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329101744-2053": dial tcp 192.168.49.2:8441: connect: connection refused
	I0329 17:20:22.096421       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0329 17:20:22.096462       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0329 17:20:22.096504       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0329 17:20:22.412685       1 server_others.go:206] "Using iptables Proxier"
	I0329 17:20:22.412732       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0329 17:20:22.412741       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0329 17:20:22.412761       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0329 17:20:22.413164       1 server.go:656] "Version info" version="v1.23.5"
	I0329 17:20:22.413903       1 config.go:317] "Starting service config controller"
	I0329 17:20:22.414507       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0329 17:20:22.414468       1 config.go:226] "Starting endpoint slice config controller"
	I0329 17:20:22.414545       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0329 17:20:22.514588       1 shared_informer.go:247] Caches are synced for service config 
	I0329 17:20:22.514609       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [6d4f9784ec46] <==
	* E0329 17:18:50.056060       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0329 17:18:50.056055       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0329 17:18:50.056074       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0329 17:18:50.056430       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0329 17:18:50.056463       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0329 17:18:50.057266       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0329 17:18:50.057315       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0329 17:18:50.057551       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0329 17:18:50.057593       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0329 17:18:50.057694       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0329 17:18:50.057738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0329 17:18:50.057851       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0329 17:18:50.057880       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0329 17:18:50.058276       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0329 17:18:50.058306       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0329 17:18:50.907618       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0329 17:18:50.907679       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0329 17:18:51.053725       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0329 17:18:51.053821       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0329 17:18:51.197722       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0329 17:18:51.197756       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0329 17:18:51.451537       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0329 17:20:15.147545       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0329 17:20:15.147629       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0329 17:20:15.147735       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kube-scheduler [e1072fa2a036] <==
	* I0329 17:20:19.411428       1 serving.go:348] Generated self-signed cert in-memory
	W0329 17:20:22.013102       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0329 17:20:22.013136       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0329 17:20:22.013143       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0329 17:20:22.013148       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0329 17:20:22.084126       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.5"
	I0329 17:20:22.085475       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0329 17:20:22.085595       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0329 17:20:22.085621       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0329 17:20:22.085698       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0329 17:20:22.186232       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-03-29 17:18:07 UTC, end at Tue 2022-03-29 17:20:39 UTC. --
	Mar 29 17:20:32 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:32.335298    5770 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20220329101744-2053?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Mar 29 17:20:32 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:32.373970    5770 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Mar 29 17:20:32 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:32.375213    5770 status_manager.go:604] "Failed to get status for pod" podUID=03cb64c9-c89a-4a7b-a3b0-6f7fd6a0f97d pod="kube-system/coredns-64897985d-cgqv9" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-64897985d-cgqv9\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:32 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:32.520790    5770 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"functional-20220329101744-2053\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329101744-2053?resourceVersion=0&timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:32 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:32.521442    5770 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"functional-20220329101744-2053\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329101744-2053?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:32 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:32.521919    5770 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"functional-20220329101744-2053\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329101744-2053?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:32 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:32.522380    5770 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"functional-20220329101744-2053\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329101744-2053?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:32 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:32.522666    5770 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"functional-20220329101744-2053\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329101744-2053?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:32 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:32.522712    5770 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
	Mar 29 17:20:32 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:32.536897    5770 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20220329101744-2053?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Mar 29 17:20:32 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:32.938118    5770 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20220329101744-2053?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Mar 29 17:20:33 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:33.584039    5770 status_manager.go:604] "Failed to get status for pod" podUID=d587f4ebedc48a236c0a6b24627b00bf pod="kube-system/kube-apiserver-functional-20220329101744-2053" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20220329101744-2053\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:33 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:33.584620    5770 status_manager.go:604] "Failed to get status for pod" podUID=18e40e400faed752d348d0030ea8fe2a pod="kube-system/kube-controller-manager-functional-20220329101744-2053" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20220329101744-2053\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:33 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:33.584871    5770 status_manager.go:604] "Failed to get status for pod" podUID=c27d44dd832d3b9434b363863b8bf820 pod="kube-system/kube-scheduler-functional-20220329101744-2053" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20220329101744-2053\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:33 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:33.585131    5770 status_manager.go:604] "Failed to get status for pod" podUID=f9c4335e-3686-4617-91d5-454f38d2a099 pod="kube-system/kube-proxy-gwwvl" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-gwwvl\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:33 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:33.585517    5770 status_manager.go:604] "Failed to get status for pod" podUID=03cb64c9-c89a-4a7b-a3b0-6f7fd6a0f97d pod="kube-system/coredns-64897985d-cgqv9" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-64897985d-cgqv9\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:33 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:33.585875    5770 status_manager.go:604] "Failed to get status for pod" podUID=e52ce334-96a3-4dfb-9e1d-9977c6a2572d pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:33 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:33.739647    5770 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20220329101744-2053?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Mar 29 17:20:35 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:35.340562    5770 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20220329101744-2053?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Mar 29 17:20:36 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:36.029105    5770 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-20220329101744-2053.16e0e9b4ac1fbff0", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-20220329101744-2053", UID:"d587f4ebedc48a236c0a6b24627b00bf", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"BackOff", Message:"Back-off restarting failed container", Source:v1.EventSource{Component:"kubelet", Host:"functional-20220329101744-2053"}, FirstTimestamp:time.Date(2022, time.March, 29, 17, 20, 25, 808748528, time.Local), LastTimestamp:time.Date(2022, time.March, 29, 17, 20, 25, 808748528, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	Mar 29 17:20:36 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:36.802788    5770 status_manager.go:604] "Failed to get status for pod" podUID=c27d44dd832d3b9434b363863b8bf820 pod="kube-system/kube-scheduler-functional-20220329101744-2053" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20220329101744-2053\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:36 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:36.803588    5770 status_manager.go:604] "Failed to get status for pod" podUID=c27d44dd832d3b9434b363863b8bf820 pod="kube-system/kube-scheduler-functional-20220329101744-2053" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20220329101744-2053\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:37 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:37.606112    5770 status_manager.go:604] "Failed to get status for pod" podUID=1a5661e220be08aa9553aa0cddca9ccc pod="kube-system/etcd-functional-20220329101744-2053" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20220329101744-2053\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:37 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:37.606376    5770 status_manager.go:604] "Failed to get status for pod" podUID=1a5661e220be08aa9553aa0cddca9ccc pod="kube-system/etcd-functional-20220329101744-2053" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20220329101744-2053\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:38 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:38.541788    5770 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20220329101744-2053?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	
	* 
	* ==> storage-provisioner [7dbb2ebc006a] <==
	* I0329 17:19:38.830112       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0329 17:19:38.837326       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0329 17:19:38.837369       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0329 17:19:38.851718       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0329 17:19:38.851883       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220329101744-2053_98a5b9cb-6f01-47bb-9d87-599bd5057270!
	I0329 17:19:38.852345       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"65681de1-eb40-4bef-8ff2-9df41038f287", APIVersion:"v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220329101744-2053_98a5b9cb-6f01-47bb-9d87-599bd5057270 became leader
	I0329 17:19:38.952178       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220329101744-2053_98a5b9cb-6f01-47bb-9d87-599bd5057270!
	
	* 
	* ==> storage-provisioner [9c35aac5a43a] <==
	* I0329 17:20:24.682433       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0329 17:20:24.690434       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0329 17:20:24.690459       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0329 17:20:28.149045       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:32.407796       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:36.003934       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:39.058308       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	

-- /stdout --
** stderr ** 
	E0329 10:20:38.394282    4068 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-20220329101744-2053 -n functional-20220329101744-2053
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-20220329101744-2053 -n functional-20220329101744-2053: exit status 2 (623.131081ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:257: "functional-20220329101744-2053" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (30.80s)

TestFunctional/serial/ComponentHealth (11.24s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:807: (dbg) Run:  kubectl --context functional-20220329101744-2053 get po -l tier=control-plane -n kube-system -o=json
E0329 10:20:45.417718    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
functional_test.go:807: (dbg) Done: kubectl --context functional-20220329101744-2053 get po -l tier=control-plane -n kube-system -o=json: (5.948605084s)
functional_test.go:822: etcd phase: Running
functional_test.go:830: etcd is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2022-03-29 10:20:23 -0700 PDT ContainerStatuses:[{Name:etcd State:{Waiting:<nil> Running:0xc0005f8768 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc000422000} Ready:false RestartCount:1 Image:k8s.gcr.io/etcd:3.5.1-0 ImageID:docker-pullable://k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263 ContainerID:docker://f36428c4dac6e02d6676cb751908497e242a7745cb94a40724817258e6ca7a14}]}
functional_test.go:822: kube-apiserver phase: Pending
functional_test.go:824: kube-apiserver is not Running: {Phase:Pending Conditions:[] Message: Reason: HostIP: PodIP: StartTime:<nil> ContainerStatuses:[]}
functional_test.go:822: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager is not Ready: {Phase:Running Conditions:[{Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2022-03-29 10:20:23 -0700 PDT ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:<nil> Running:0xc0009a8048 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc000422070} Ready:false RestartCount:1 Image:k8s.gcr.io/kube-controller-manager:v1.23.5 ImageID:docker-pullable://k8s.gcr.io/kube-controller-manager@sha256:cca0fb3532abedcc95c5f64268d54da9ecc56cc4817ff08d0128941cf2b0e1a4 ContainerID:docker://ebc4b70c301ba3794f89546f868f7e1ce62e1ba1f9483bc8fa11405950edb290}]}
functional_test.go:822: kube-scheduler phase: Running
functional_test.go:832: kube-scheduler status: Ready
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect functional-20220329101744-2053
helpers_test.go:236: (dbg) docker inspect functional-20220329101744-2053:

-- stdout --
	[
	    {
	        "Id": "ea38e6ed40e80ad877ab8f305094fe2a1bd1281cbb276da027659cb556a90c0c",
	        "Created": "2022-03-29T17:17:56.88045748Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 32865,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-03-29T17:18:05.331720759Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/ea38e6ed40e80ad877ab8f305094fe2a1bd1281cbb276da027659cb556a90c0c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ea38e6ed40e80ad877ab8f305094fe2a1bd1281cbb276da027659cb556a90c0c/hostname",
	        "HostsPath": "/var/lib/docker/containers/ea38e6ed40e80ad877ab8f305094fe2a1bd1281cbb276da027659cb556a90c0c/hosts",
	        "LogPath": "/var/lib/docker/containers/ea38e6ed40e80ad877ab8f305094fe2a1bd1281cbb276da027659cb556a90c0c/ea38e6ed40e80ad877ab8f305094fe2a1bd1281cbb276da027659cb556a90c0c-json.log",
	        "Name": "/functional-20220329101744-2053",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220329101744-2053:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220329101744-2053",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c0192e98a580d706dfce60f37c1739397d94b837e28279c9cea240224752ae81-init/diff:/var/lib/docker/overlay2/ed8311db7d03c646788fa108908d5a1d8dddbaed01ecc14e796262809d634ef8/diff:/var/lib/docker/overlay2/fffad9e78bb3ec7e8b5ea36a3468bf8ad790e12f5aed873eb38fef827d7819fe/diff:/var/lib/docker/overlay2/0884cb8dd1433ab9b10af09dfc2db455976180eea4d3e98f437636fe78827973/diff:/var/lib/docker/overlay2/ad6230d3d71ac6d03ec77bba7173b167ee5a5699e7c23d8c189916cda703f311/diff:/var/lib/docker/overlay2/0702d3d4f17839407c1ee9378a34037e4f600920369a546685de73c8539c346d/diff:/var/lib/docker/overlay2/247b25a75047a5666eb37c7cff671c04ce71a5bc50a9956dd60897c584fc1f59/diff:/var/lib/docker/overlay2/837412f364b27df35b5665a99ab6cd231e8de69bae0f73bb28bad6ed0dd128f5/diff:/var/lib/docker/overlay2/531154dc6a26469f67410ef43d135306a3eedbc67f74a21a32538d617c77c318/diff:/var/lib/docker/overlay2/229d67a6594733bc5019727a0811252250289d2f61510cedaa4ead33d70330ab/diff:/var/lib/docker/overlay2/f7b1fb
eba30fb183ce01e8abaf7f8e26555ae1fa899e3dc78c646c25860907e7/diff:/var/lib/docker/overlay2/01f923b173dbf278a094561e0671bf034644bb4c2a4bda564053248bddd1cded/diff:/var/lib/docker/overlay2/912d5b7621231c6a8cbe746efb19c4efdc43b3e5fb88cc798251161aad015dd1/diff:/var/lib/docker/overlay2/d78bba14768b6a4768f26b406b68719bf1f0529762c5414a2e66c102bb2dbe71/diff:/var/lib/docker/overlay2/ffef64e071881a1b5c0da519ac56ce3ed67d288e8acf9cb89dbff58600113528/diff:/var/lib/docker/overlay2/aac4aad89acc5de8af0000e113f40a2b16664b088b5a11558cc39799b58a1c2b/diff:/var/lib/docker/overlay2/e762af91a33bb52ea1c8b1cd14b715553facd0cfc6df4b0c95c127811722532f/diff:/var/lib/docker/overlay2/cb812104812b7b9f9f0fb2017da3ddc5d4dcc14449d0487fc9c5c415525295ee/diff:/var/lib/docker/overlay2/37f2490d17e9c4953ec715b1612c0e27aaec35650d51458e1236463dd4a58b6e/diff:/var/lib/docker/overlay2/40da79e6dfd2a9d937de1adbf1ea8003dbc69aeea53d384c6b6054e1b6bd5175/diff:/var/lib/docker/overlay2/dabd87cf915b8c6844c9e01839884ea36a80ac63b4a0b48807ae1c7e4d50880f/diff:/var/lib/d
ocker/overlay2/52f54f0a2ef32e65e13e2d0291eab6d777c705dab895c2756886dfe506770b67/diff:/var/lib/docker/overlay2/96e29a8e30ef05c20455a905506a683c59782040ec9e915b8ce7a1e350a1ca15/diff:/var/lib/docker/overlay2/d13cd5bc75428f331da47ddc7278e354da3e5bf55a2865a3adcb5a2783d20ece/diff:/var/lib/docker/overlay2/9c2d77ccb10db700ff80711d5423a863d697548e98169f21dd8771fe5295a701/diff:/var/lib/docker/overlay2/76716609e1be01b86880eab934cfb80ef7e4b8aebfb34e3f0a319c7749f2c0a6/diff:/var/lib/docker/overlay2/1e437f9ef5e019af60a5e498295b2bbc9912f077fc593b09eb222057079fedf8/diff:/var/lib/docker/overlay2/63e12553dff4fbd6ccd84c18767825ca3205f216fec3abf3996238c658aed421/diff:/var/lib/docker/overlay2/3855d310128c2ce844b1c4eec4376b6508c7d9ebde1c06803b3d4a699b47d7a2/diff:/var/lib/docker/overlay2/bbde0e6b887250e86f09f8e916dbc892e9c55c55145dcd05872c2a429dd6688c/diff:/var/lib/docker/overlay2/77097752f9b18de746139649a4f9cf5778c6a556501cdeae76b30c84102a2ad4/diff:/var/lib/docker/overlay2/c81fc30e11f320f4767cd26e30359508e11c5abcf3dea34ce5a66eef8e5
fafe6/diff:/var/lib/docker/overlay2/4ea9241abe44119b374b7e129cca5b4fe04d087ca36c3b57967c69eb724c6c81/diff:/var/lib/docker/overlay2/606560f69d52a56c66502f40fecef53f0f79cd1f4f513169ed1203b66db79b5f/diff:/var/lib/docker/overlay2/6712571c65e7f0f6617d12edd03e18393d7353546adec47601cd6474de23f21d/diff:/var/lib/docker/overlay2/13c973edc7a5a46c7a8d3e2b9176907b3e98b21624bd35bd83da411699f16e4e/diff:/var/lib/docker/overlay2/030befada2a963170d65bb2a31ec0ec42eba1ff1bb8aba8240cbf840d7d3d371/diff:/var/lib/docker/overlay2/ac9d59d6e110e5187326f8e7cb3b7a2ccd103fecc01cc685430e655ee9e65443/diff:/var/lib/docker/overlay2/57e60640eba0ca057651f8e9237e8cf8463274e500717b34328689375e822439/diff:/var/lib/docker/overlay2/aab485f0e5d1476b141999faa273823cd68233c08265b68d5a8ed0ae024b00af/diff:/var/lib/docker/overlay2/f7ba5605eda2c32092b20df56b38d4b4087611c36717abedaf45e38c4f5772a7/diff:/var/lib/docker/overlay2/a880e812dfcc4e2d109077663acc43d91ae2f9fb6aafa9778fcc4ea35b2bd270/diff:/var/lib/docker/overlay2/74d9ed5ae5bcc9f703b641f24730f252bfe3bf
25347ddb81d302b2a68396b787/diff:/var/lib/docker/overlay2/fc3eaabab34142464de90a97b9b7a9b2eb1a4f4a0c9c47d674162d116597e1cb/diff:/var/lib/docker/overlay2/925393f6e25c9f479452aa9b73178021bd52b4995e89da06dcaf12b58e6a738e/diff:/var/lib/docker/overlay2/abbcdd25ad33e602512274bdca503a41f0ad5acecd3d040e23bb86b8d0b7ac67/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0192e98a580d706dfce60f37c1739397d94b837e28279c9cea240224752ae81/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0192e98a580d706dfce60f37c1739397d94b837e28279c9cea240224752ae81/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0192e98a580d706dfce60f37c1739397d94b837e28279c9cea240224752ae81/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220329101744-2053",
	                "Source": "/var/lib/docker/volumes/functional-20220329101744-2053/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220329101744-2053",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220329101744-2053",
	                "name.minikube.sigs.k8s.io": "functional-20220329101744-2053",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c714e37f4bf7b39a979c22e4b123cc0bfe6beaa1d68fa2f2f3d4a22cbeca9452",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52219"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52215"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52216"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52217"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52218"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c714e37f4bf7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220329101744-2053": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ea38e6ed40e8",
	                        "functional-20220329101744-2053"
	                    ],
	                    "NetworkID": "ee1e6efd28dce3238bcf209dff9dd4541068959a153a07c92ad2417dcb04ff76",
	                    "EndpointID": "5ff0d6b4aeaac202ef9f262a0af7ff981c5fa7b8eeb1a475ea2fc338ea6c0186",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20220329101744-2053 -n functional-20220329101744-2053
helpers_test.go:240: (dbg) Done: out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20220329101744-2053 -n functional-20220329101744-2053: (1.305578144s)
helpers_test.go:245: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 logs -n 25: (3.089220687s)
helpers_test.go:253: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------|--------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                                    Args                                     |            Profile             |  User   | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------------------------------------------|--------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | nospam-20220329101558-2053 --log_dir                                        | nospam-20220329101558-2053     | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:17 PDT | Tue, 29 Mar 2022 10:17:18 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 |                                |         |         |                               |                               |
	|         | unpause                                                                     |                                |         |         |                               |                               |
	| -p      | nospam-20220329101558-2053 --log_dir                                        | nospam-20220329101558-2053     | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:18 PDT | Tue, 29 Mar 2022 10:17:19 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 |                                |         |         |                               |                               |
	|         | unpause                                                                     |                                |         |         |                               |                               |
	| -p      | nospam-20220329101558-2053 --log_dir                                        | nospam-20220329101558-2053     | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:19 PDT | Tue, 29 Mar 2022 10:17:19 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 |                                |         |         |                               |                               |
	|         | unpause                                                                     |                                |         |         |                               |                               |
	| -p      | nospam-20220329101558-2053 --log_dir                                        | nospam-20220329101558-2053     | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:20 PDT | Tue, 29 Mar 2022 10:17:37 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 |                                |         |         |                               |                               |
	|         | stop                                                                        |                                |         |         |                               |                               |
	| -p      | nospam-20220329101558-2053 --log_dir                                        | nospam-20220329101558-2053     | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:37 PDT | Tue, 29 Mar 2022 10:17:37 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 |                                |         |         |                               |                               |
	|         | stop                                                                        |                                |         |         |                               |                               |
	| -p      | nospam-20220329101558-2053 --log_dir                                        | nospam-20220329101558-2053     | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:37 PDT | Tue, 29 Mar 2022 10:17:38 PDT |
	|         | /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 |                                |         |         |                               |                               |
	|         | stop                                                                        |                                |         |         |                               |                               |
	| delete  | -p nospam-20220329101558-2053                                               | nospam-20220329101558-2053     | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:38 PDT | Tue, 29 Mar 2022 10:17:44 PDT |
	| start   | -p                                                                          | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:17:44 PDT | Tue, 29 Mar 2022 10:19:47 PDT |
	|         | functional-20220329101744-2053                                              |                                |         |         |                               |                               |
	|         | --memory=4000                                                               |                                |         |         |                               |                               |
	|         | --apiserver-port=8441                                                       |                                |         |         |                               |                               |
	|         | --wait=all --driver=docker                                                  |                                |         |         |                               |                               |
	| start   | -p                                                                          | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:19:47 PDT | Tue, 29 Mar 2022 10:19:54 PDT |
	|         | functional-20220329101744-2053                                              |                                |         |         |                               |                               |
	|         | --alsologtostderr -v=8                                                      |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:19:56 PDT | Tue, 29 Mar 2022 10:19:58 PDT |
	|         | cache add k8s.gcr.io/pause:3.1                                              |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:19:58 PDT | Tue, 29 Mar 2022 10:20:00 PDT |
	|         | cache add k8s.gcr.io/pause:3.3                                              |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:00 PDT | Tue, 29 Mar 2022 10:20:02 PDT |
	|         | cache add                                                                   |                                |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                     |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053 cache add                                    | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:02 PDT | Tue, 29 Mar 2022 10:20:04 PDT |
	|         | minikube-local-cache-test:functional-20220329101744-2053                    |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053 cache delete                                 | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:04 PDT | Tue, 29 Mar 2022 10:20:04 PDT |
	|         | minikube-local-cache-test:functional-20220329101744-2053                    |                                |         |         |                               |                               |
	| cache   | delete k8s.gcr.io/pause:3.3                                                 | minikube                       | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:04 PDT | Tue, 29 Mar 2022 10:20:04 PDT |
	| cache   | list                                                                        | minikube                       | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:04 PDT | Tue, 29 Mar 2022 10:20:04 PDT |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:04 PDT | Tue, 29 Mar 2022 10:20:05 PDT |
	|         | ssh sudo crictl images                                                      |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:05 PDT | Tue, 29 Mar 2022 10:20:05 PDT |
	|         | ssh sudo docker rmi                                                         |                                |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                     |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:06 PDT | Tue, 29 Mar 2022 10:20:07 PDT |
	|         | cache reload                                                                |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:07 PDT | Tue, 29 Mar 2022 10:20:08 PDT |
	|         | ssh sudo crictl inspecti                                                    |                                |         |         |                               |                               |
	|         | k8s.gcr.io/pause:latest                                                     |                                |         |         |                               |                               |
	| cache   | delete k8s.gcr.io/pause:3.1                                                 | minikube                       | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:08 PDT | Tue, 29 Mar 2022 10:20:08 PDT |
	| cache   | delete k8s.gcr.io/pause:latest                                              | minikube                       | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:08 PDT | Tue, 29 Mar 2022 10:20:08 PDT |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:08 PDT | Tue, 29 Mar 2022 10:20:09 PDT |
	|         | kubectl -- --context                                                        |                                |         |         |                               |                               |
	|         | functional-20220329101744-2053                                              |                                |         |         |                               |                               |
	|         | get pods                                                                    |                                |         |         |                               |                               |
	| kubectl | --profile=functional-20220329101744-2053                                    | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:09 PDT | Tue, 29 Mar 2022 10:20:09 PDT |
	|         | -- --context                                                                |                                |         |         |                               |                               |
	|         | functional-20220329101744-2053 get pods                                     |                                |         |         |                               |                               |
	| -p      | functional-20220329101744-2053                                              | functional-20220329101744-2053 | jenkins | v1.25.2 | Tue, 29 Mar 2022 10:20:37 PDT | Tue, 29 Mar 2022 10:20:39 PDT |
	|         | logs -n 25                                                                  |                                |         |         |                               |                               |
	|---------|-----------------------------------------------------------------------------|--------------------------------|---------|---------|-------------------------------|-------------------------------|
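	
	The audit table above records every minikube invocation made against these profiles during the run; its final row is the logs call that produced the capture below. The same capture, audit table included, can be regenerated at any point with:
	
	    out/minikube-darwin-amd64 -p functional-20220329101744-2053 logs -n 25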
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 10:20:09
	Running on machine: 37310
	Binary: Built with gc go1.17.7 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
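	Every entry that follows uses the klog prefix declared above: a severity letter (I, W, E, or F), the date as mmdd, wall-clock time with microseconds, the thread id, and the emitting source file and line. One practical consequence is that warnings and errors can be pulled out of a saved capture with a single pattern match (a sketch; the file name is illustrative):
	
	    grep -E '^[[:space:]]*[WEF][0-9]{4} ' last-start.log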
	I0329 10:20:09.803708    3917 out.go:297] Setting OutFile to fd 1 ...
	I0329 10:20:09.803842    3917 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:20:09.803845    3917 out.go:310] Setting ErrFile to fd 2...
	I0329 10:20:09.803848    3917 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:20:09.803918    3917 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 10:20:09.804161    3917 out.go:304] Setting JSON to false
	I0329 10:20:09.819165    3917 start.go:114] hostinfo: {"hostname":"37310.local","uptime":1184,"bootTime":1648573225,"procs":319,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0329 10:20:09.819244    3917 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 10:20:09.846104    3917 out.go:176] * [functional-20220329101744-2053] minikube v1.25.2 on Darwin 11.2.3
	I0329 10:20:09.846223    3917 notify.go:193] Checking for updates...
	I0329 10:20:09.872138    3917 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 10:20:09.897704    3917 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 10:20:09.923792    3917 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0329 10:20:09.949763    3917 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 10:20:09.975610    3917 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	I0329 10:20:09.975999    3917 config.go:176] Loaded profile config "functional-20220329101744-2053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 10:20:09.976035    3917 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 10:20:10.074872    3917 docker.go:137] docker version: linux-20.10.6
	I0329 10:20:10.074995    3917 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 10:20:10.256426    3917 info.go:263] docker info: {ID:4GJZ:5WQJ:PTOH:OBGV:UGLB:2QMR:SRUC:WPW4:I7LT:V2VN:S3VH:GWN3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:53 SystemTime:2022-03-29 17:20:10.199972347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0329 10:20:10.304949    3917 out.go:176] * Using the docker driver based on existing profile
	I0329 10:20:10.304967    3917 start.go:283] selected driver: docker
	I0329 10:20:10.304971    3917 start.go:800] validating driver "docker" against &{Name:functional-20220329101744-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329101744-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 10:20:10.305054    3917 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0329 10:20:10.305264    3917 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 10:20:10.489753    3917 info.go:263] docker info: {ID:4GJZ:5WQJ:PTOH:OBGV:UGLB:2QMR:SRUC:WPW4:I7LT:V2VN:S3VH:GWN3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:53 SystemTime:2022-03-29 17:20:10.433765936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0329 10:20:10.491741    3917 start_flags.go:837] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0329 10:20:10.491762    3917 cni.go:93] Creating CNI manager for ""
	I0329 10:20:10.491772    3917 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 10:20:10.491783    3917 start_flags.go:306] config:
	{Name:functional-20220329101744-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329101744-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 10:20:10.540598    3917 out.go:176] * Starting control plane node functional-20220329101744-2053 in cluster functional-20220329101744-2053
	I0329 10:20:10.540656    3917 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 10:20:10.566384    3917 out.go:176] * Pulling base image ...
	I0329 10:20:10.566452    3917 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 10:20:10.566514    3917 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 10:20:10.566523    3917 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 10:20:10.566544    3917 cache.go:57] Caching tarball of preloaded images
	I0329 10:20:10.566762    3917 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0329 10:20:10.566782    3917 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0329 10:20:10.567998    3917 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/config.json ...
	I0329 10:20:10.687336    3917 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 10:20:10.687355    3917 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 10:20:10.687363    3917 cache.go:208] Successfully downloaded all kic artifacts
	I0329 10:20:10.687422    3917 start.go:348] acquiring machines lock for functional-20220329101744-2053: {Name:mk70efe01c61c1665e9dd1baf19d51fbdfc798fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 10:20:10.687511    3917 start.go:352] acquired machines lock for "functional-20220329101744-2053" in 72.401µs
	I0329 10:20:10.687532    3917 start.go:94] Skipping create...Using existing machine configuration
	I0329 10:20:10.687540    3917 fix.go:55] fixHost starting: 
	I0329 10:20:10.687802    3917 cli_runner.go:133] Run: docker container inspect functional-20220329101744-2053 --format={{.State.Status}}
	I0329 10:20:10.804808    3917 fix.go:108] recreateIfNeeded on functional-20220329101744-2053: state=Running err=<nil>
	W0329 10:20:10.804828    3917 fix.go:134] unexpected machine state, will restart: <nil>
	I0329 10:20:10.853326    3917 out.go:176] * Updating the running docker "functional-20220329101744-2053" container ...
	I0329 10:20:10.853351    3917 machine.go:88] provisioning docker machine ...
	I0329 10:20:10.853372    3917 ubuntu.go:169] provisioning hostname "functional-20220329101744-2053"
	I0329 10:20:10.853477    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:10.971089    3917 main.go:130] libmachine: Using SSH client type: native
	I0329 10:20:10.971286    3917 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13a3660] 0x13a6740 <nil>  [] 0s} 127.0.0.1 52219 <nil> <nil>}
	I0329 10:20:10.971301    3917 main.go:130] libmachine: About to run SSH command:
	sudo hostname functional-20220329101744-2053 && echo "functional-20220329101744-2053" | sudo tee /etc/hostname
	I0329 10:20:11.101732    3917 main.go:130] libmachine: SSH cmd err, output: <nil>: functional-20220329101744-2053
	
	I0329 10:20:11.101830    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:11.217800    3917 main.go:130] libmachine: Using SSH client type: native
	I0329 10:20:11.217950    3917 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13a3660] 0x13a6740 <nil>  [] 0s} 127.0.0.1 52219 <nil> <nil>}
	I0329 10:20:11.217962    3917 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-20220329101744-2053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20220329101744-2053/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-20220329101744-2053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 10:20:11.338348    3917 main.go:130] libmachine: SSH cmd err, output: <nil>: 
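	The two SSH commands above make hostname provisioning idempotent: the first writes /etc/hostname outright, while the second touches /etc/hosts only when no line already ends in the machine name, preferring to rewrite an existing 127.0.1.1 entry over appending a duplicate. Condensed to its core (a sketch; the profile name stands in for any hostname):
	
	    NAME=functional-20220329101744-2053
	    grep -xq ".*\s${NAME}" /etc/hosts || echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts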
	I0329 10:20:11.338362    3917 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube}
	I0329 10:20:11.338381    3917 ubuntu.go:177] setting up certificates
	I0329 10:20:11.338391    3917 provision.go:83] configureAuth start
	I0329 10:20:11.338487    3917 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20220329101744-2053
	I0329 10:20:11.454609    3917 provision.go:138] copyHostCerts
	I0329 10:20:11.454717    3917 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem, removing ...
	I0329 10:20:11.454723    3917 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem
	I0329 10:20:11.454828    3917 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem (1123 bytes)
	I0329 10:20:11.455023    3917 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem, removing ...
	I0329 10:20:11.455031    3917 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem
	I0329 10:20:11.455084    3917 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem (1679 bytes)
	I0329 10:20:11.455220    3917 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem, removing ...
	I0329 10:20:11.455223    3917 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem
	I0329 10:20:11.455278    3917 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem (1078 bytes)
	I0329 10:20:11.455394    3917 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem org=jenkins.functional-20220329101744-2053 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-20220329101744-2053]
	I0329 10:20:11.608674    3917 provision.go:172] copyRemoteCerts
	I0329 10:20:11.608734    3917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 10:20:11.608786    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:11.729207    3917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329101744-2053/id_rsa Username:docker}
	I0329 10:20:11.817638    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 10:20:11.835172    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0329 10:20:11.853791    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0329 10:20:11.870496    3917 provision.go:86] duration metric: configureAuth took 532.097423ms
	I0329 10:20:11.870505    3917 ubuntu.go:193] setting minikube options for container-runtime
	I0329 10:20:11.870715    3917 config.go:176] Loaded profile config "functional-20220329101744-2053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 10:20:11.870784    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:11.987282    3917 main.go:130] libmachine: Using SSH client type: native
	I0329 10:20:11.987413    3917 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13a3660] 0x13a6740 <nil>  [] 0s} 127.0.0.1 52219 <nil> <nil>}
	I0329 10:20:11.987418    3917 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 10:20:12.110158    3917 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 10:20:12.110167    3917 ubuntu.go:71] root file system type: overlay
	I0329 10:20:12.110322    3917 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 10:20:12.110415    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:12.227356    3917 main.go:130] libmachine: Using SSH client type: native
	I0329 10:20:12.227498    3917 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13a3660] 0x13a6740 <nil>  [] 0s} 127.0.0.1 52219 <nil> <nil>}
	I0329 10:20:12.227558    3917 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 10:20:12.353975    3917 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0329 10:20:12.354079    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:12.470312    3917 main.go:130] libmachine: Using SSH client type: native
	I0329 10:20:12.470442    3917 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13a3660] 0x13a6740 <nil>  [] 0s} 127.0.0.1 52219 <nil> <nil>}
	I0329 10:20:12.470452    3917 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0329 10:20:12.595345    3917 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 10:20:12.595354    3917 machine.go:91] provisioned docker machine in 1.74200228s
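	The diff-or-replace one-liner above is a change-detection guard: diff -u exits 0 when the freshly rendered unit matches the installed one, so the mv / daemon-reload / restart branch runs only when the file actually changed, which is why an unchanged Docker daemon survives this step without a restart. The same logic, expanded for readability (paths as in the log):
	
	    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	    fi
	
	The empty ExecStart= line followed by the full command in the unit itself is the standard systemd idiom for replacing, rather than appending to, an inherited ExecStart; the provisioner confirms the effective unit later in this log with sudo systemctl cat docker.service.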
	I0329 10:20:12.595359    3917 start.go:302] post-start starting for "functional-20220329101744-2053" (driver="docker")
	I0329 10:20:12.595362    3917 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 10:20:12.595443    3917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 10:20:12.595500    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:12.710204    3917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329101744-2053/id_rsa Username:docker}
	I0329 10:20:12.797305    3917 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 10:20:12.801221    3917 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 10:20:12.801232    3917 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 10:20:12.801237    3917 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 10:20:12.801239    3917 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 10:20:12.801247    3917 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/addons for local assets ...
	I0329 10:20:12.801351    3917 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files for local assets ...
	I0329 10:20:12.801495    3917 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/20532.pem -> 20532.pem in /etc/ssl/certs
	I0329 10:20:12.801654    3917 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/test/nested/copy/2053/hosts -> hosts in /etc/test/nested/copy/2053
	I0329 10:20:12.801713    3917 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2053
	I0329 10:20:12.808899    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/20532.pem --> /etc/ssl/certs/20532.pem (1708 bytes)
	I0329 10:20:12.825855    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/test/nested/copy/2053/hosts --> /etc/test/nested/copy/2053/hosts (40 bytes)
	I0329 10:20:12.843428    3917 start.go:305] post-start completed in 248.055702ms
	I0329 10:20:12.843514    3917 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 10:20:12.843574    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:12.957452    3917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329101744-2053/id_rsa Username:docker}
	I0329 10:20:13.041831    3917 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 10:20:13.046850    3917 fix.go:57] fixHost completed within 2.359314315s
	I0329 10:20:13.046860    3917 start.go:81] releasing machines lock for "functional-20220329101744-2053", held for 2.359347287s
	I0329 10:20:13.046957    3917 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20220329101744-2053
	I0329 10:20:13.166446    3917 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0329 10:20:13.166452    3917 ssh_runner.go:195] Run: systemctl --version
	I0329 10:20:13.166513    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:13.166529    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:13.293600    3917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329101744-2053/id_rsa Username:docker}
	I0329 10:20:13.293604    3917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329101744-2053/id_rsa Username:docker}
	I0329 10:20:13.476441    3917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0329 10:20:13.487484    3917 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 10:20:13.497103    3917 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0329 10:20:13.497160    3917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0329 10:20:13.506595    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0329 10:20:13.518846    3917 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0329 10:20:13.595391    3917 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0329 10:20:13.670375    3917 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 10:20:13.680244    3917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0329 10:20:13.755866    3917 ssh_runner.go:195] Run: sudo systemctl start docker
	I0329 10:20:13.765969    3917 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 10:20:13.803832    3917 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 10:20:13.868638    3917 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0329 10:20:13.868840    3917 cli_runner.go:133] Run: docker exec -t functional-20220329101744-2053 dig +short host.docker.internal
	I0329 10:20:14.045443    3917 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0329 10:20:14.045541    3917 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0329 10:20:14.049965    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:14.188954    3917 out.go:176]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0329 10:20:14.189110    3917 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 10:20:14.189279    3917 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 10:20:14.221288    3917 docker.go:606] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20220329101744-2053
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.3
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0329 10:20:14.221299    3917 docker.go:537] Images already preloaded, skipping extraction
	I0329 10:20:14.221395    3917 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 10:20:14.250989    3917 docker.go:606] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-20220329101744-2053
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.3
	k8s.gcr.io/pause:3.1
	k8s.gcr.io/pause:latest
	
	-- /stdout --
	I0329 10:20:14.251003    3917 cache_images.go:84] Images are preloaded, skipping loading
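	The two identical image lists come from back-to-back docker images invocations: the first gates tarball extraction ("Images already preloaded, skipping extraction"), the second gates image loading ("Images are preloaded, skipping loading"). The same check can be reproduced against the running profile (profile name as in this run):
	
	    out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh -- docker images --format "{{.Repository}}:{{.Tag}}"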
	I0329 10:20:14.251104    3917 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0329 10:20:14.330755    3917 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0329 10:20:14.330771    3917 cni.go:93] Creating CNI manager for ""
	I0329 10:20:14.330777    3917 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 10:20:14.330784    3917 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0329 10:20:14.330797    3917 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20220329101744-2053 NodeName:functional-20220329101744-2053 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0329 10:20:14.330906    3917 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "functional-20220329101744-2053"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
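	Worth noting how the single --extra-config flag from this test lands in the generated config: the component prefix (apiserver) selects the kubeadm stanza, and the key/value pair becomes an extraArgs entry, replacing rather than extending the default admission-plugin list that extraconfig.go reports overwriting above. Schematically:
	
	    out/minikube-darwin-amd64 start -p functional-20220329101744-2053 \
	      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
	    # surfaces in ClusterConfiguration as:
	    #   apiServer:
	    #     extraArgs:
	    #       enable-admission-plugins: "NamespaceAutoProvision"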
	
	I0329 10:20:14.330998    3917 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=functional-20220329101744-2053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:functional-20220329101744-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0329 10:20:14.331060    3917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0329 10:20:14.338941    3917 binaries.go:44] Found k8s binaries, skipping transfer
	I0329 10:20:14.339011    3917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0329 10:20:14.346340    3917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (356 bytes)
	I0329 10:20:14.359122    3917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0329 10:20:14.372031    3917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1902 bytes)
	I0329 10:20:14.385069    3917 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0329 10:20:14.388886    3917 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053 for IP: 192.168.49.2
	I0329 10:20:14.389018    3917 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key
	I0329 10:20:14.389068    3917 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key
	I0329 10:20:14.389151    3917 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.key
	I0329 10:20:14.389219    3917 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/apiserver.key.dd3b5fb2
	I0329 10:20:14.389272    3917 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/proxy-client.key
	I0329 10:20:14.389475    3917 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/2053.pem (1338 bytes)
	W0329 10:20:14.389518    3917 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/2053_empty.pem, impossibly tiny 0 bytes
	I0329 10:20:14.389534    3917 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem (1675 bytes)
	I0329 10:20:14.389575    3917 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem (1078 bytes)
	I0329 10:20:14.389611    3917 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem (1123 bytes)
	I0329 10:20:14.389640    3917 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem (1679 bytes)
	I0329 10:20:14.389709    3917 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/20532.pem (1708 bytes)
	I0329 10:20:14.390284    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0329 10:20:14.407520    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0329 10:20:14.424808    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0329 10:20:14.442958    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0329 10:20:14.460282    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0329 10:20:14.481065    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0329 10:20:14.500298    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0329 10:20:14.519050    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0329 10:20:14.536906    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0329 10:20:14.553966    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/2053.pem --> /usr/share/ca-certificates/2053.pem (1338 bytes)
	I0329 10:20:14.571410    3917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/20532.pem --> /usr/share/ca-certificates/20532.pem (1708 bytes)
	I0329 10:20:14.589614    3917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0329 10:20:14.603333    3917 ssh_runner.go:195] Run: openssl version
	I0329 10:20:14.609100    3917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20532.pem && ln -fs /usr/share/ca-certificates/20532.pem /etc/ssl/certs/20532.pem"
	I0329 10:20:14.616901    3917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20532.pem
	I0329 10:20:14.621145    3917 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 29 17:17 /usr/share/ca-certificates/20532.pem
	I0329 10:20:14.621196    3917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20532.pem
	I0329 10:20:14.627067    3917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20532.pem /etc/ssl/certs/3ec20f2e.0"
	I0329 10:20:14.634666    3917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0329 10:20:14.642793    3917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0329 10:20:14.647552    3917 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 29 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0329 10:20:14.647640    3917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0329 10:20:14.653568    3917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0329 10:20:14.661017    3917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2053.pem && ln -fs /usr/share/ca-certificates/2053.pem /etc/ssl/certs/2053.pem"
	I0329 10:20:14.670246    3917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2053.pem
	I0329 10:20:14.674381    3917 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 29 17:17 /usr/share/ca-certificates/2053.pem
	I0329 10:20:14.674430    3917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2053.pem
	I0329 10:20:14.680092    3917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2053.pem /etc/ssl/certs/51391683.0"
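The openssl/ln -fs pairs above implement the standard OpenSSL CA directory layout: each certificate under /usr/share/ca-certificates must also be reachable in /etc/ssl/certs under the name <subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem) so that chain verification can look CAs up by hash. A hedged sketch of that one step, assuming openssl is on PATH and write access to /etc/ssl/certs:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // hashSymlink recreates the "<hash>.0" symlink the log shows,
    // e.g. /etc/ssl/certs/b5213941.0 -> minikubeCA.pem.
    func hashSymlink(pem string) error {
        // openssl x509 -hash -noout -in <pem> prints the subject hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ln -fs semantics: drop any stale link first
        return os.Symlink(pem, link)
    }

    func main() {
        if err := hashSymlink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }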
	I0329 10:20:14.688214    3917 kubeadm.go:391] StartCluster: {Name:functional-20220329101744-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329101744-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 10:20:14.688358    3917 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0329 10:20:14.716654    3917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0329 10:20:14.724460    3917 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0329 10:20:14.724472    3917 kubeadm.go:601] restartCluster start
	I0329 10:20:14.724523    3917 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0329 10:20:14.731580    3917 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0329 10:20:14.731660    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:14.920891    3917 kubeconfig.go:92] found "functional-20220329101744-2053" server: "https://127.0.0.1:52218"
	I0329 10:20:14.921811    3917 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0329 10:20:14.929694    3917 kubeadm.go:569] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-03-29 17:18:36.359627992 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-03-29 17:20:14.401322431 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
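The needs-reconfigure decision above falls straight out of `diff -u`'s exit code: status 0 means the deployed kubeadm.yaml matches the freshly rendered kubeadm.yaml.new, status 1 means they differ (here, only the admission-plugins line). A minimal sketch of that check, using the paths from the log; treating any exit status other than 0 or 1 as a genuine diff failure is an assumption:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // needsReconfigure reports whether the rendered kubeadm config differs
    // from the one already on the node, returning the unified diff if so.
    func needsReconfigure() (bool, string, error) {
        cmd := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.Output()
        if err == nil {
            return false, "", nil // exit 0: configs identical
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, string(out), nil // exit 1: configs differ; out holds the diff
        }
        return false, "", err // exit >1: diff itself failed
    }

    func main() {
        changed, diff, err := needsReconfigure()
        fmt.Println(changed, err)
        if changed {
            fmt.Println(diff)
        }
    }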
	I0329 10:20:14.929704    3917 kubeadm.go:1067] stopping kube-system containers ...
	I0329 10:20:14.929774    3917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0329 10:20:14.959374    3917 docker.go:438] Stopping containers: [7dbb2ebc006a 162cb93e1f50 de8290e9fe44 02b83e91f9c6 2bcc7f96a251 2309664b2dca 2c02e68a12d6 6d4f9784ec46 e94d2bb30a5c d60fd8047ed0 00593e4ce0a9 b4206332236c 11b46fa961b3 11c80f833d8d 45cc2e7c49eb]
	I0329 10:20:14.959465    3917 ssh_runner.go:195] Run: docker stop 7dbb2ebc006a 162cb93e1f50 de8290e9fe44 02b83e91f9c6 2bcc7f96a251 2309664b2dca 2c02e68a12d6 6d4f9784ec46 e94d2bb30a5c d60fd8047ed0 00593e4ce0a9 b4206332236c 11b46fa961b3 11c80f833d8d 45cc2e7c49eb
	I0329 10:20:20.175191    3917 ssh_runner.go:235] Completed: docker stop 7dbb2ebc006a 162cb93e1f50 de8290e9fe44 02b83e91f9c6 2bcc7f96a251 2309664b2dca 2c02e68a12d6 6d4f9784ec46 e94d2bb30a5c d60fd8047ed0 00593e4ce0a9 b4206332236c 11b46fa961b3 11c80f833d8d 45cc2e7c49eb: (5.215695554s)
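Stopping the control plane before reconfiguring follows the list-then-stop pattern visible above: select candidates with a name filter, then pass every ID to a single `docker stop` (which accounts for the 5.2s completion time). A rough equivalent, reusing the k8s_.*_(kube-system)_ filter from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func stopKubeSystemContainers() error {
        // docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return nil // nothing to stop
        }
        // One docker stop invocation with every ID, as in the log.
        return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }

    func main() {
        if err := stopKubeSystemContainers(); err != nil {
            fmt.Println("stop failed:", err)
        }
    }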
	I0329 10:20:20.175271    3917 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0329 10:20:20.213817    3917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0329 10:20:20.224229    3917 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 29 17:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Mar 29 17:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Mar 29 17:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Mar 29 17:18 /etc/kubernetes/scheduler.conf
	
	I0329 10:20:20.224296    3917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0329 10:20:20.234715    3917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0329 10:20:20.243873    3917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0329 10:20:20.252674    3917 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0329 10:20:20.252738    3917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0329 10:20:20.278277    3917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0329 10:20:20.288617    3917 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0329 10:20:20.288677    3917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
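The grep/rm pairs above prune any of the four kubeadm-generated kubeconfigs that do not reference the expected control-plane endpoint, so the later `kubeadm init phase kubeconfig` regenerates them. A sketch under the same convention (grep exiting non-zero means the marker string is absent):

    package main

    import (
        "fmt"
        "os/exec"
    )

    const endpoint = "https://control-plane.minikube.internal:8441"

    func pruneStaleConfigs() {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits 1 when the endpoint string is missing from the file.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q may not be in %s - removing\n", endpoint, f)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() { pruneStaleConfigs() }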
	I0329 10:20:20.300580    3917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0329 10:20:20.308273    3917 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0329 10:20:20.308284    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0329 10:20:20.386047    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0329 10:20:21.499092    3917 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.11301405s)
	I0329 10:20:21.499107    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0329 10:20:21.644328    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0329 10:20:21.712599    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
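Rather than a full `kubeadm init`, the restart path replays individual init phases against the updated config in dependency order: certs, kubeconfig, kubelet-start, control-plane, etcd. A condensed sketch of that sequence; the log runs each phase under sudo with PATH pointing at /var/lib/minikube/binaries/v1.23.5, while this sketch assumes kubeadm is simply on PATH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Phases replayed by the restart path, in the order the log shows.
    var phases = [][]string{
        {"certs", "all"},
        {"kubeconfig", "all"},
        {"kubelet-start"},
        {"control-plane", "all"},
        {"etcd", "local"},
    }

    func replayPhases() error {
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("phase %v: %v\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := replayPhases(); err != nil {
            fmt.Println(err)
        }
    }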
	I0329 10:20:21.813832    3917 api_server.go:51] waiting for apiserver process to appear ...
	I0329 10:20:21.813915    3917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0329 10:20:21.824838    3917 api_server.go:71] duration metric: took 11.024686ms to wait for apiserver process to appear ...
	I0329 10:20:21.824854    3917 api_server.go:87] waiting for apiserver healthz status ...
	I0329 10:20:21.824865    3917 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52218/healthz ...
	I0329 10:20:22.003709    3917 api_server.go:266] https://127.0.0.1:52218/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0329 10:20:22.003720    3917 api_server.go:102] status: https://127.0.0.1:52218/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0329 10:20:22.504507    3917 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52218/healthz ...
	I0329 10:20:22.511850    3917 api_server.go:266] https://127.0.0.1:52218/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0329 10:20:22.511859    3917 api_server.go:102] status: https://127.0.0.1:52218/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0329 10:20:23.004168    3917 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52218/healthz ...
	I0329 10:20:23.009465    3917 api_server.go:266] https://127.0.0.1:52218/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0329 10:20:23.009482    3917 api_server.go:102] status: https://127.0.0.1:52218/healthz returned error 500: [body identical to the 500 response above]
	I0329 10:20:23.504648    3917 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52218/healthz ...
	I0329 10:20:23.511215    3917 api_server.go:266] https://127.0.0.1:52218/healthz returned 200:
	ok
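The poll above is a plain HTTPS GET against /healthz on the forwarded port, treating each outcome as progress: 403 while anonymous access is still denied, 500 while poststarthooks finish (rbac/bootstrap-roles first, then scheduling/bootstrap-system-priority-classes), 200 once healthy. A bare-bones poller under the same contract; InsecureSkipVerify is used because the apiserver certificate is issued for the cluster names rather than 127.0.0.1, and the timeout values are assumptions:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // "ok"
                }
                // 403/500 bodies explain what is still initialising; keep polling.
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        return fmt.Errorf("healthz never returned 200 within %s", timeout)
    }

    func main() {
        fmt.Println(waitHealthz("https://127.0.0.1:52218/healthz", time.Minute))
    }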
	I0329 10:20:23.518107    3917 api_server.go:140] control plane version: v1.23.5
	I0329 10:20:23.518114    3917 api_server.go:130] duration metric: took 1.693258713s to wait for apiserver health ...
	I0329 10:20:23.518119    3917 cni.go:93] Creating CNI manager for ""
	I0329 10:20:23.518122    3917 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 10:20:23.518127    3917 system_pods.go:43] waiting for kube-system pods to appear ...
	I0329 10:20:23.524875    3917 system_pods.go:59] 7 kube-system pods found
	I0329 10:20:23.524884    3917 system_pods.go:61] "coredns-64897985d-cgqv9" [03cb64c9-c89a-4a7b-a3b0-6f7fd6a0f97d] Running
	I0329 10:20:23.524887    3917 system_pods.go:61] "etcd-functional-20220329101744-2053" [76e54b07-2a89-4ecb-a69b-3d563297d112] Running
	I0329 10:20:23.524889    3917 system_pods.go:61] "kube-apiserver-functional-20220329101744-2053" [308b23e0-a233-415b-9aa5-0961e612689b] Running
	I0329 10:20:23.524891    3917 system_pods.go:61] "kube-controller-manager-functional-20220329101744-2053" [07302e0e-8132-492a-a66f-2f1888edee04] Running
	I0329 10:20:23.524893    3917 system_pods.go:61] "kube-proxy-gwwvl" [f9c4335e-3686-4617-91d5-454f38d2a099] Running
	I0329 10:20:23.524895    3917 system_pods.go:61] "kube-scheduler-functional-20220329101744-2053" [bb5ed99c-3350-49e8-b4a3-d53f4bfb7496] Running
	I0329 10:20:23.524896    3917 system_pods.go:61] "storage-provisioner" [e52ce334-96a3-4dfb-9e1d-9977c6a2572d] Running
	I0329 10:20:23.524898    3917 system_pods.go:74] duration metric: took 6.769126ms to wait for pod list to return data ...
	I0329 10:20:23.524903    3917 node_conditions.go:102] verifying NodePressure condition ...
	I0329 10:20:23.527767    3917 node_conditions.go:122] node storage ephemeral capacity is 107077304Ki
	I0329 10:20:23.527778    3917 node_conditions.go:123] node cpu capacity is 6
	I0329 10:20:23.527785    3917 node_conditions.go:105] duration metric: took 2.88008ms to run NodePressure ...
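(For scale: 107077304Ki ÷ 1024² ≈ 102.1Gi of ephemeral storage alongside 6 CPUs, comfortably above anything the NodePressure check would flag.)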
	I0329 10:20:23.527792    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0329 10:20:23.701960    3917 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0329 10:20:23.706132    3917 kubeadm.go:752] kubelet initialised
	I0329 10:20:23.706138    3917 kubeadm.go:753] duration metric: took 4.169507ms waiting for restarted kubelet to initialise ...
	I0329 10:20:23.706144    3917 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 10:20:23.711622    3917 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-cgqv9" in "kube-system" namespace to be "Ready" ...
	I0329 10:20:23.716941    3917 pod_ready.go:97] node "functional-20220329101744-2053" hosting pod "coredns-64897985d-cgqv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.716949    3917 pod_ready.go:81] duration metric: took 5.309906ms waiting for pod "coredns-64897985d-cgqv9" in "kube-system" namespace to be "Ready" ...
	E0329 10:20:23.716953    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "coredns-64897985d-cgqv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.716961    3917 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	I0329 10:20:23.721434    3917 pod_ready.go:97] node "functional-20220329101744-2053" hosting pod "etcd-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.721439    3917 pod_ready.go:81] duration metric: took 4.471971ms waiting for pod "etcd-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	E0329 10:20:23.721446    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "etcd-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.721455    3917 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	I0329 10:20:23.725925    3917 pod_ready.go:97] node "functional-20220329101744-2053" hosting pod "kube-apiserver-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.725932    3917 pod_ready.go:81] duration metric: took 4.473517ms waiting for pod "kube-apiserver-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	E0329 10:20:23.725937    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "kube-apiserver-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.725944    3917 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	I0329 10:20:23.926377    3917 pod_ready.go:97] node "functional-20220329101744-2053" hosting pod "kube-controller-manager-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.926383    3917 pod_ready.go:81] duration metric: took 200.436041ms waiting for pod "kube-controller-manager-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	E0329 10:20:23.926389    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "kube-controller-manager-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:23.926398    3917 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gwwvl" in "kube-system" namespace to be "Ready" ...
	I0329 10:20:24.326778    3917 pod_ready.go:97] node "functional-20220329101744-2053" hosting pod "kube-proxy-gwwvl" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:24.326795    3917 pod_ready.go:81] duration metric: took 400.391077ms waiting for pod "kube-proxy-gwwvl" in "kube-system" namespace to be "Ready" ...
	E0329 10:20:24.326799    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "kube-proxy-gwwvl" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:24.326809    3917 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	I0329 10:20:24.726758    3917 pod_ready.go:97] node "functional-20220329101744-2053" hosting pod "kube-scheduler-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:24.726770    3917 pod_ready.go:81] duration metric: took 399.955353ms waiting for pod "kube-scheduler-functional-20220329101744-2053" in "kube-system" namespace to be "Ready" ...
	E0329 10:20:24.726776    3917 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-20220329101744-2053" hosting pod "kube-scheduler-functional-20220329101744-2053" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-20220329101744-2053" has status "Ready":"False"
	I0329 10:20:24.726786    3917 pod_ready.go:38] duration metric: took 1.020638246s of extra waiting for all system-critical pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
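The pod_ready loop above short-circuits: each system pod's wait is skipped as soon as the hosting node reports Ready=False, which is why all six 4m0s waits resolve in about a second total. A sketch of the underlying node-condition check with client-go; the kubeconfig path and node name handling here are assumptions:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(c *kubernetes.Clientset, name string) (bool, error) {
        n, err := c.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range n.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        c := kubernetes.NewForConfigOrDie(cfg)
        ready, err := nodeReady(c, "functional-20220329101744-2053")
        fmt.Println(ready, err) // false while the node has status "Ready":"False", as in the log
    }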
	I0329 10:20:24.726797    3917 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0329 10:20:24.734424    3917 ops.go:34] apiserver oom_adj: -16
	I0329 10:20:24.734429    3917 kubeadm.go:605] restartCluster took 10.009969939s
	I0329 10:20:24.734434    3917 kubeadm.go:393] StartCluster complete in 10.046250554s
	I0329 10:20:24.734446    3917 settings.go:142] acquiring lock: {Name:mk5b01a4191281d3f224b52386a90714bd22cc72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 10:20:24.734536    3917 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 10:20:24.734959    3917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig: {Name:mk7bef67bea8eb326a483bde80a52ac63c137849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 10:20:24.738066    3917 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20220329101744-2053" rescaled to 1
	I0329 10:20:24.738096    3917 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 10:20:24.738102    3917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0329 10:20:24.738126    3917 addons.go:415] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0329 10:20:24.817272    3917 out.go:176] * Verifying Kubernetes components...
	I0329 10:20:24.738266    3917 config.go:176] Loaded profile config "functional-20220329101744-2053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 10:20:24.817320    3917 addons.go:65] Setting storage-provisioner=true in profile "functional-20220329101744-2053"
	I0329 10:20:24.817322    3917 addons.go:65] Setting default-storageclass=true in profile "functional-20220329101744-2053"
	I0329 10:20:24.795231    3917 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0329 10:20:24.817343    3917 addons.go:153] Setting addon storage-provisioner=true in "functional-20220329101744-2053"
	W0329 10:20:24.843464    3917 addons.go:165] addon storage-provisioner should already be in state true
	I0329 10:20:24.817354    3917 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20220329101744-2053"
	I0329 10:20:24.843491    3917 host.go:66] Checking if "functional-20220329101744-2053" exists ...
	I0329 10:20:24.817357    3917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 10:20:24.843889    3917 cli_runner.go:133] Run: docker container inspect functional-20220329101744-2053 --format={{.State.Status}}
	I0329 10:20:24.844085    3917 cli_runner.go:133] Run: docker container inspect functional-20220329101744-2053 --format={{.State.Status}}
	I0329 10:20:24.855943    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:25.048563    3917 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 10:20:25.048734    3917 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 10:20:25.048739    3917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0329 10:20:25.048844    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:25.051116    3917 addons.go:153] Setting addon default-storageclass=true in "functional-20220329101744-2053"
	W0329 10:20:25.051130    3917 addons.go:165] addon default-storageclass should already be in state true
	I0329 10:20:25.051152    3917 host.go:66] Checking if "functional-20220329101744-2053" exists ...
	I0329 10:20:25.051634    3917 cli_runner.go:133] Run: docker container inspect functional-20220329101744-2053 --format={{.State.Status}}
	I0329 10:20:25.054920    3917 node_ready.go:35] waiting up to 6m0s for node "functional-20220329101744-2053" to be "Ready" ...
	I0329 10:20:25.194758    3917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329101744-2053/id_rsa Username:docker}
	I0329 10:20:25.194838    3917 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0329 10:20:25.194845    3917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0329 10:20:25.194931    3917 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220329101744-2053
	I0329 10:20:25.300543    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 10:20:25.329550    3917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52219 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/functional-20220329101744-2053/id_rsa Username:docker}
	I0329 10:20:25.436200    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0329 10:20:26.696351    3917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.395791826s)
	W0329 10:20:26.696368    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:26.696378    3917 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:26.698566    3917 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.262353816s)
	W0329 10:20:26.698580    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:26.698589    3917 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:26.976016    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0329 10:20:27.017384    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:27.017396    3917 retry.go:31] will retry after 436.71002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:27.059148    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0329 10:20:27.101134    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:27.101144    3917 retry.go:31] will retry after 351.64282ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:27.453086    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0329 10:20:27.454423    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0329 10:20:27.498290    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:27.498308    3917 retry.go:31] will retry after 520.108592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W0329 10:20:27.498327    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:27.498335    3917 retry.go:31] will retry after 667.587979ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.027104    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0329 10:20:28.067211    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.067222    3917 retry.go:31] will retry after 477.256235ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.167158    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0329 10:20:28.206742    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.206753    3917 retry.go:31] will retry after 553.938121ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.553099    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0329 10:20:28.596404    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.596417    3917 retry.go:31] will retry after 755.539547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.765889    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0329 10:20:28.807216    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:28.807226    3917 retry.go:31] will retry after 1.013654073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:29.362175    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0329 10:20:29.403145    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:29.403157    3917 retry.go:31] will retry after 1.927317724s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:29.823317    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0329 10:20:29.870629    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:29.870640    3917 retry.go:31] will retry after 2.493863364s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:31.340690    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0329 10:20:31.382676    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:31.382686    3917 retry.go:31] will retry after 2.033977981s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:32.367274    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0329 10:20:32.406450    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:32.406459    3917 retry.go:31] will retry after 2.507808949s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:33.424295    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0329 10:20:33.465952    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:33.465963    3917 retry.go:31] will retry after 3.494322709s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:34.922359    3917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0329 10:20:34.965164    3917 addons.go:369] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I0329 10:20:34.965173    3917 retry.go:31] will retry after 4.138597834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
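The cascade above is minikube's retry.go applying roughly exponential backoff with jitter to both `kubectl apply` calls while the apiserver beneath them is restarting: the announced delays grow from ~276ms to ~4.1s before the node-ready wait gives up entirely. The same shape in isolation; the growth factor and jitter bounds here are assumptions, not retry.go's exact constants:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn until it succeeds or attempts are exhausted, sleeping an
    // exponentially growing, jittered interval between failures.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        delay := base
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2 // assumed growth factor
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(10, 300*time.Millisecond, func() error {
            calls++
            if calls < 4 { // simulate the apiserver refusing connections for a while
                return fmt.Errorf("connection to the server localhost:8441 refused")
            }
            return nil
        })
        fmt.Println(err)
    }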
	I0329 10:20:36.124259    3917 node_ready.go:53] error getting node "functional-20220329101744-2053": Get "https://127.0.0.1:52218/api/v1/nodes/functional-20220329101744-2053": EOF
	I0329 10:20:36.124270    3917 node_ready.go:38] duration metric: took 11.069351875s waiting for node "functional-20220329101744-2053" to be "Ready" ...
	I0329 10:20:36.151340    3917 out.go:176] 
	W0329 10:20:36.151539    3917 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-20220329101744-2053": Get "https://127.0.0.1:52218/api/v1/nodes/functional-20220329101744-2053": EOF
	W0329 10:20:36.151559    3917 out.go:241] * 
	W0329 10:20:36.152589    3917 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> Docker <==
	* -- Logs begin at Tue 2022-03-29 17:18:07 UTC, end at Tue 2022-03-29 17:20:48 UTC. --
	Mar 29 17:18:33 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:18:33.672703286Z" level=info msg="Daemon has completed initialization"
	Mar 29 17:18:33 functional-20220329101744-2053 systemd[1]: Started Docker Application Container Engine.
	Mar 29 17:18:33 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:18:33.699444606Z" level=info msg="API listen on [::]:2376"
	Mar 29 17:18:33 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:18:33.701575716Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 29 17:19:17 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:19:17.669672925Z" level=info msg="ignoring event" container=74446a4a8dd647f83997f9a63c0b0f056b723088b263b25c5107951c2fa46550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:19:17 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:19:17.719546718Z" level=info msg="ignoring event" container=b1aa96243016a906f822a673cbf161738b75c4dbfcf460965614f6339b96617e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:19:38 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:19:38.203019550Z" level=info msg="ignoring event" container=162cb93e1f50ddf67d12dd5968c3916bb1d69057a108411f049db0f61f9d112b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.230935582Z" level=info msg="ignoring event" container=7dbb2ebc006ab7bfb8321a4065ac6928974cf531a262fe4ad6f2b674b1733011 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.305509858Z" level=info msg="ignoring event" container=2c02e68a12d6d161370fd0a0ef4e8a6c9db0270fd3ebe58b2606cfd8f6d97568 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.315909561Z" level=info msg="ignoring event" container=2bcc7f96a25123a4428a6b00fdb2cc4f8b9edb18f8283d06e1f26ba0a8584c1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.316250778Z" level=info msg="ignoring event" container=11c80f833d8dde40a4a1cfd2bc12f6c123a6b761dc9b54697bdc0d8bc62dfe05 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.318540873Z" level=info msg="ignoring event" container=2309664b2dca702eb4a61ee5cc3a10938c265903849b7ab4b898dfd66dfe4793 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.326071255Z" level=info msg="ignoring event" container=11b46fa961b383c50e44a98d457917834f2888d4f2afa66a05c8a8be0a270510 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.326119812Z" level=info msg="ignoring event" container=b4206332236c1d6a93ab1667527093fa9b6fd33e53655b558e7df4c22d6151b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.326824157Z" level=info msg="ignoring event" container=45cc2e7c49ebc3ab5b0410a0712d1406a48a343019ea0fde64a2b5f36de3f105 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.330064372Z" level=info msg="ignoring event" container=de8290e9fe44b9444a7685361f59c2f04408bdd82f6595bca170f08ac06d2355 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.334320487Z" level=info msg="ignoring event" container=d60fd8047ed09c80c628c297df3621a110f11a8f65b12c51e8471096e8a23715 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:15 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:15.403148738Z" level=info msg="ignoring event" container=e94d2bb30a5c39caf6ac8526e3658dfd41e986a21cac5901beaf07739134cd45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:16 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:16.400075314Z" level=info msg="ignoring event" container=00593e4ce0a97d0563d0eb8e475ef1a97f2e34fbd398585a82ac26d7d4cfc406 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:17 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:17.680664680Z" level=info msg="ignoring event" container=6d4f9784ec462af8f6e1dc7edfeafb806b60e78539983fbfa8169fa4eeb793a4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:20 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:20.139898151Z" level=info msg="ignoring event" container=02b83e91f9c63300d6ab9e272bdeda877068a8f89f3e266ced3e0cc0691d70ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:24 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:24.287143994Z" level=info msg="ignoring event" container=f4f608ccfb3cf51c424690f2e06f5b94e0979d41d2b246729a9d04d919f9b091 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:25 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:25.693241520Z" level=info msg="ignoring event" container=3982a341516303c3a5a471a6c61992a7237e9c099e95f5109fb0b2c4ccfa060b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:25 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:25.727538372Z" level=info msg="ignoring event" container=a72b7badb1cf607845d046f1e34fd04ce7bfcfd561c8e4a78a4affb2cc12ca3b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 29 17:20:25 functional-20220329101744-2053 dockerd[468]: time="2022-03-29T17:20:25.780720316Z" level=info msg="ignoring event" container=dae2b195616e4862e4f231515bbba86c37a032df065c2a94d7791ea720fed141 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	61fd97c967136       3fc1d62d65872       5 seconds ago        Running             kube-apiserver            2                   b880bdc9e325a
	dae2b195616e4       3fc1d62d65872       24 seconds ago       Exited              kube-apiserver            1                   b880bdc9e325a
	6df75c7e3990f       a4ca41631cc7a       24 seconds ago       Running             coredns                   1                   629dc99b4434f
	9c35aac5a43a0       6e38f40d628db       25 seconds ago       Running             storage-provisioner       2                   7a7e41db2ffc7
	e1072fa2a0368       884d49d6d8c9f       31 seconds ago       Running             kube-scheduler            1                   0a36befd70d4a
	f36428c4dac6e       25f8c7f3da61c       32 seconds ago       Running             etcd                      1                   4c7cbfe1c073d
	ebc4b70c301ba       b0c9e5e4dbb14       32 seconds ago       Running             kube-controller-manager   1                   50e86ad5d2050
	f6729cadb6286       3c53fa8541f95       33 seconds ago       Running             kube-proxy                1                   a87188636f85f
	7dbb2ebc006ab       6e38f40d628db       About a minute ago   Exited              storage-provisioner       1                   de8290e9fe44b
	02b83e91f9c63       a4ca41631cc7a       About a minute ago   Exited              coredns                   0                   2bcc7f96a2512
	2309664b2dca7       3c53fa8541f95       About a minute ago   Exited              kube-proxy                0                   2c02e68a12d6d
	6d4f9784ec462       884d49d6d8c9f       2 minutes ago        Exited              kube-scheduler            0                   b4206332236c1
	e94d2bb30a5c3       b0c9e5e4dbb14       2 minutes ago        Exited              kube-controller-manager   0                   11b46fa961b38
	d60fd8047ed09       25f8c7f3da61c       2 minutes ago        Exited              etcd                      0                   45cc2e7c49ebc
	
	* 
	* ==> coredns [02b83e91f9c6] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [6df75c7e3990] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	W0329 17:20:25.695667       1 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	W0329 17:20:25.695690       1 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	W0329 17:20:25.695894       1 reflector.go:441] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	E0329 17:20:26.549780       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:26.836503       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:27.046387       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:28.402204       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:28.593214       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:29.128631       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:33.015602       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:33.251780       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:34.397547       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:41.688736       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:41.855473       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:43.799652       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.2/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=530": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220329101744-2053
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220329101744-2053
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3
	                    minikube.k8s.io/name=functional-20220329101744-2053
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_03_29T10_18_54_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 29 Mar 2022 17:18:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220329101744-2053
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 29 Mar 2022 17:20:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 29 Mar 2022 17:20:22 +0000   Tue, 29 Mar 2022 17:18:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 29 Mar 2022 17:20:22 +0000   Tue, 29 Mar 2022 17:18:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 29 Mar 2022 17:20:22 +0000   Tue, 29 Mar 2022 17:18:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 29 Mar 2022 17:20:22 +0000   Tue, 29 Mar 2022 17:20:22 +0000   KubeletNotReady              PLEG is not healthy: pleg has yet to be successful
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220329101744-2053
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107077304Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                6b084980-74f6-4703-a5da-6626ef65fea8
	  Boot ID:                    76bd837b-cfbb-40c3-8d2c-49de16666973
	  Kernel Version:             5.10.25-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.13
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-cgqv9                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     104s
	  kube-system                 etcd-functional-20220329101744-2053                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         111s
	  kube-system                 kube-apiserver-functional-20220329101744-2053             250m (4%)     0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-controller-manager-functional-20220329101744-2053    200m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-gwwvl                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-functional-20220329101744-2053             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (12%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 26s                  kube-proxy  
	  Normal  Starting                 101s                 kube-proxy  
	  Normal  NodeHasNoDiskPressure    2m9s (x3 over 2m9s)  kubelet     Node functional-20220329101744-2053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x3 over 2m9s)  kubelet     Node functional-20220329101744-2053 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m9s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 2m9s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m8s (x4 over 2m9s)  kubelet     Node functional-20220329101744-2053 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s                 kubelet     Node functional-20220329101744-2053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s                 kubelet     Node functional-20220329101744-2053 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s                 kubelet     Node functional-20220329101744-2053 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  115s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                104s                 kubelet     Node functional-20220329101744-2053 status is now: NodeReady
	  Normal  Starting                 28s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s                  kubelet     Node functional-20220329101744-2053 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s                  kubelet     Node functional-20220329101744-2053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s                  kubelet     Node functional-20220329101744-2053 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             27s                  kubelet     Node functional-20220329101744-2053 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  27s                  kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.034645] bpfilter: read fail 0
	[  +0.028517] bpfilter: read fail 0
	[  +0.028661] bpfilter: read fail 0
	[  +0.028040] bpfilter: write fail -32
	[  +0.024301] bpfilter: write fail -32
	[  +0.040315] bpfilter: write fail -32
	[  +0.026203] bpfilter: write fail -32
	[  +0.037462] bpfilter: read fail 0
	[  +0.035114] bpfilter: write fail -32
	[  +0.029969] bpfilter: write fail -32
	[  +0.030434] bpfilter: read fail 0
	[  +0.032641] bpfilter: write fail -32
	[  +0.031111] bpfilter: write fail -32
	[  +0.026072] bpfilter: write fail -32
	[  +0.028308] bpfilter: read fail 0
	[  +0.026371] bpfilter: write fail -32
	[  +0.028417] bpfilter: write fail -32
	[  +0.027491] bpfilter: write fail -32
	[  +0.033520] bpfilter: write fail -32
	[  +0.038616] bpfilter: write fail -32
	[  +0.039334] bpfilter: write fail -32
	[  +0.031370] bpfilter: read fail 0
	[  +0.028314] bpfilter: read fail 0
	[  +0.038695] bpfilter: read fail 0
	[  +0.031261] bpfilter: read fail 0
	
	* 
	* ==> etcd [d60fd8047ed0] <==
	* {"level":"info","ts":"2022-03-29T17:18:47.703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-03-29T17:18:47.704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-03-29T17:18:47.704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-03-29T17:18:47.704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:18:47.704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-03-29T17:18:47.704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:18:47.704Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220329101744-2053 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-29T17:18:47.713Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-29T17:18:47.763Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-03-29T17:18:47.763Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-03-29T17:20:15.207Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-03-29T17:20:15.207Z","caller":"embed/etcd.go:367","msg":"closing etcd server","name":"functional-20220329101744-2053","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/03/29 17:20:15 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/03/29 17:20:15 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-03-29T17:20:15.217Z","caller":"etcdserver/server.go:1438","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-03-29T17:20:15.230Z","caller":"embed/etcd.go:562","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-29T17:20:15.231Z","caller":"embed/etcd.go:567","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-29T17:20:15.231Z","caller":"embed/etcd.go:369","msg":"closed etcd server","name":"functional-20220329101744-2053","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [f36428c4dac6] <==
	* {"level":"info","ts":"2022-03-29T17:20:18.427Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-03-29T17:20:18.427Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-03-29T17:20:18.427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-03-29T17:20:18.428Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-03-29T17:20:18.428Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:20:18.428Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-03-29T17:20:18.430Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-03-29T17:20:18.430Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-03-29T17:20:18.430Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-03-29T17:20:18.430Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-29T17:20:18.430Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-03-29T17:20:20.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2022-03-29T17:20:20.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2022-03-29T17:20:20.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-03-29T17:20:20.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2022-03-29T17:20:20.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-03-29T17:20:20.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2022-03-29T17:20:20.134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-03-29T17:20:20.135Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220329101744-2053 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-03-29T17:20:20.135Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-03-29T17:20:20.135Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-03-29T17:20:20.135Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:20:20.135Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-03-29T17:20:20.136Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-03-29T17:20:20.136Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  17:20:49 up 9 min,  0 users,  load average: 1.69, 1.76, 1.05
	Linux functional-20220329101744-2053 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [61fd97c96713] <==
	* I0329 17:20:46.492378       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0329 17:20:46.478635       1 apf_controller.go:317] Starting API Priority and Fairness config controller
	I0329 17:20:46.492658       1 customresource_discovery_controller.go:209] Starting DiscoveryController
	I0329 17:20:46.494994       1 controller.go:85] Starting OpenAPI controller
	I0329 17:20:46.495036       1 naming_controller.go:291] Starting NamingConditionController
	I0329 17:20:46.495049       1 establishing_controller.go:76] Starting EstablishingController
	I0329 17:20:46.495057       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0329 17:20:46.495063       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0329 17:20:46.495071       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0329 17:20:46.497786       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0329 17:20:46.497816       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I0329 17:20:46.497833       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0329 17:20:46.498239       1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0329 17:20:46.575411       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0329 17:20:46.578572       1 cache.go:39] Caches are synced for autoregister controller
	I0329 17:20:46.578919       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0329 17:20:46.579066       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0329 17:20:46.579964       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0329 17:20:46.592684       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0329 17:20:46.597847       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0329 17:20:46.604393       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0329 17:20:47.478422       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0329 17:20:47.482441       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0329 17:20:47.483874       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0329 17:20:48.671429       1 controller.go:611] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [dae2b195616e] <==
	* I0329 17:20:25.690120       1 server.go:565] external host was not specified, using 192.168.49.2
	I0329 17:20:25.690678       1 server.go:172] Version: v1.23.5
	E0329 17:20:25.691027       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	* 
	* ==> kube-controller-manager [e94d2bb30a5c] <==
	* I0329 17:19:04.996654       1 shared_informer.go:247] Caches are synced for PV protection 
	I0329 17:19:04.996663       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0329 17:19:04.997822       1 shared_informer.go:247] Caches are synced for HPA 
	I0329 17:19:04.998545       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0329 17:19:05.003289       1 shared_informer.go:247] Caches are synced for node 
	I0329 17:19:05.003322       1 range_allocator.go:173] Starting range CIDR allocator
	I0329 17:19:05.003325       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0329 17:19:05.003330       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0329 17:19:05.007128       1 range_allocator.go:374] Set node functional-20220329101744-2053 PodCIDR to [10.244.0.0/24]
	I0329 17:19:05.106894       1 shared_informer.go:247] Caches are synced for attach detach 
	I0329 17:19:05.117545       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0329 17:19:05.151020       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0329 17:19:05.196864       1 shared_informer.go:247] Caches are synced for endpoint 
	I0329 17:19:05.203787       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 17:19:05.205022       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 17:19:05.402744       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0329 17:19:05.618941       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 17:19:05.681966       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 17:19:05.682059       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0329 17:19:05.702153       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gwwvl"
	I0329 17:19:05.899693       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-q6b25"
	I0329 17:19:05.904115       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-cgqv9"
	I0329 17:19:05.950596       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0329 17:19:05.953936       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-q6b25"
	I0329 17:19:09.950763       1 node_lifecycle_controller.go:1190] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-controller-manager [ebc4b70c301b] <==
	* I0329 17:20:48.607998       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0329 17:20:48.609527       1 shared_informer.go:247] Caches are synced for cronjob 
	I0329 17:20:48.612592       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0329 17:20:48.639646       1 shared_informer.go:247] Caches are synced for GC 
	I0329 17:20:48.643968       1 shared_informer.go:247] Caches are synced for deployment 
	I0329 17:20:48.645183       1 shared_informer.go:247] Caches are synced for disruption 
	I0329 17:20:48.645192       1 disruption.go:371] Sending events to api server.
	I0329 17:20:48.667630       1 shared_informer.go:247] Caches are synced for PV protection 
	I0329 17:20:48.699543       1 shared_informer.go:247] Caches are synced for attach detach 
	I0329 17:20:48.701363       1 shared_informer.go:247] Caches are synced for expand 
	I0329 17:20:48.706861       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0329 17:20:48.714539       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0329 17:20:48.769538       1 shared_informer.go:247] Caches are synced for endpoint 
	I0329 17:20:48.774164       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0329 17:20:48.811530       1 shared_informer.go:247] Caches are synced for taint 
	I0329 17:20:48.811651       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	I0329 17:20:48.811683       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0329 17:20:48.811824       1 node_lifecycle_controller.go:1012] Missing timestamp for Node functional-20220329101744-2053. Assuming now as a timestamp.
	I0329 17:20:48.811866       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0329 17:20:48.811869       1 event.go:294] "Event occurred" object="functional-20220329101744-2053" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-20220329101744-2053 event: Registered Node functional-20220329101744-2053 in Controller"
	I0329 17:20:48.835838       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 17:20:48.850086       1 shared_informer.go:247] Caches are synced for resource quota 
	I0329 17:20:49.263560       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 17:20:49.273080       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0329 17:20:49.273135       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [2309664b2dca] <==
	* I0329 17:19:06.260520       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0329 17:19:06.260597       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0329 17:19:06.260619       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0329 17:19:07.807364       1 server_others.go:206] "Using iptables Proxier"
	I0329 17:19:07.807419       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0329 17:19:07.807430       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0329 17:19:07.807440       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0329 17:19:07.807787       1 server.go:656] "Version info" version="v1.23.5"
	I0329 17:19:07.808221       1 config.go:317] "Starting service config controller"
	I0329 17:19:07.808253       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0329 17:19:07.808439       1 config.go:226] "Starting endpoint slice config controller"
	I0329 17:19:07.808517       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0329 17:19:07.908960       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0329 17:19:07.909011       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [f6729cadb628] <==
	* E0329 17:20:17.459691       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329101744-2053": dial tcp 192.168.49.2:8441: connect: connection refused
	E0329 17:20:18.456065       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329101744-2053": dial tcp 192.168.49.2:8441: connect: connection refused
	I0329 17:20:22.096421       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0329 17:20:22.096462       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0329 17:20:22.096504       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0329 17:20:22.412685       1 server_others.go:206] "Using iptables Proxier"
	I0329 17:20:22.412732       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0329 17:20:22.412741       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0329 17:20:22.412761       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0329 17:20:22.413164       1 server.go:656] "Version info" version="v1.23.5"
	I0329 17:20:22.413903       1 config.go:317] "Starting service config controller"
	I0329 17:20:22.414507       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0329 17:20:22.414468       1 config.go:226] "Starting endpoint slice config controller"
	I0329 17:20:22.414545       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0329 17:20:22.514588       1 shared_informer.go:247] Caches are synced for service config 
	I0329 17:20:22.514609       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [6d4f9784ec46] <==
	* E0329 17:18:50.056060       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0329 17:18:50.056055       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0329 17:18:50.056074       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0329 17:18:50.056430       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0329 17:18:50.056463       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0329 17:18:50.057266       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0329 17:18:50.057315       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0329 17:18:50.057551       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0329 17:18:50.057593       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0329 17:18:50.057694       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0329 17:18:50.057738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0329 17:18:50.057851       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0329 17:18:50.057880       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0329 17:18:50.058276       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0329 17:18:50.058306       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0329 17:18:50.907618       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0329 17:18:50.907679       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0329 17:18:51.053725       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0329 17:18:51.053821       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0329 17:18:51.197722       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0329 17:18:51.197756       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0329 17:18:51.451537       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0329 17:20:15.147545       1 secure_serving.go:311] Stopped listening on 127.0.0.1:10259
	I0329 17:20:15.147629       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0329 17:20:15.147735       1 configmap_cafile_content.go:222] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kube-scheduler [e1072fa2a036] <==
	* I0329 17:20:19.411428       1 serving.go:348] Generated self-signed cert in-memory
	W0329 17:20:22.013102       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0329 17:20:22.013136       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0329 17:20:22.013143       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0329 17:20:22.013148       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0329 17:20:22.084126       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.5"
	I0329 17:20:22.085475       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0329 17:20:22.085595       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0329 17:20:22.085621       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0329 17:20:22.085698       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0329 17:20:22.186232       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0329 17:20:46.499787       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E0329 17:20:46.499872       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: unknown (get services)
	E0329 17:20:46.499956       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E0329 17:20:46.500026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0329 17:20:46.500048       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0329 17:20:46.500206       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-03-29 17:18:07 UTC, end at Tue 2022-03-29 17:20:50 UTC. --
	Mar 29 17:20:33 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:33.585517    5770 status_manager.go:604] "Failed to get status for pod" podUID=03cb64c9-c89a-4a7b-a3b0-6f7fd6a0f97d pod="kube-system/coredns-64897985d-cgqv9" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-64897985d-cgqv9\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:33 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:33.585875    5770 status_manager.go:604] "Failed to get status for pod" podUID=e52ce334-96a3-4dfb-9e1d-9977c6a2572d pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:33 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:33.739647    5770 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20220329101744-2053?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Mar 29 17:20:35 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:35.340562    5770 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20220329101744-2053?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Mar 29 17:20:36 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:36.029105    5770 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-functional-20220329101744-2053.16e0e9b4ac1fbff0", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-functional-20220329101744-2053", UID:"d587f4ebedc48a236c0a6b24627b00bf", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"BackOff", Message:"Back-off restarting failed
container", Source:v1.EventSource{Component:"kubelet", Host:"functional-20220329101744-2053"}, FirstTimestamp:time.Date(2022, time.March, 29, 17, 20, 25, 808748528, time.Local), LastTimestamp:time.Date(2022, time.March, 29, 17, 20, 25, 808748528, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.49.2:8441: connect: connection refused'(may retry after sleeping)
	Mar 29 17:20:36 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:36.802788    5770 status_manager.go:604] "Failed to get status for pod" podUID=c27d44dd832d3b9434b363863b8bf820 pod="kube-system/kube-scheduler-functional-20220329101744-2053" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20220329101744-2053\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:36 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:36.803588    5770 status_manager.go:604] "Failed to get status for pod" podUID=c27d44dd832d3b9434b363863b8bf820 pod="kube-system/kube-scheduler-functional-20220329101744-2053" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20220329101744-2053\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:37 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:37.606112    5770 status_manager.go:604] "Failed to get status for pod" podUID=1a5661e220be08aa9553aa0cddca9ccc pod="kube-system/etcd-functional-20220329101744-2053" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20220329101744-2053\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:37 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:37.606376    5770 status_manager.go:604] "Failed to get status for pod" podUID=1a5661e220be08aa9553aa0cddca9ccc pod="kube-system/etcd-functional-20220329101744-2053" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20220329101744-2053\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:38 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:38.541788    5770 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: Get "https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-20220329101744-2053?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused
	Mar 29 17:20:42 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:42.583284    5770 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"functional-20220329101744-2053\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329101744-2053?resourceVersion=0&timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:42 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:42.583731    5770 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"functional-20220329101744-2053\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329101744-2053?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:42 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:42.583900    5770 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"functional-20220329101744-2053\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329101744-2053?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:42 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:42.584044    5770 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"functional-20220329101744-2053\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329101744-2053?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:42 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:42.584207    5770 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"functional-20220329101744-2053\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220329101744-2053?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:42 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:42.584247    5770 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
	Mar 29 17:20:43 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:43.583711    5770 status_manager.go:604] "Failed to get status for pod" podUID=03cb64c9-c89a-4a7b-a3b0-6f7fd6a0f97d pod="kube-system/coredns-64897985d-cgqv9" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-64897985d-cgqv9\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:43 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:43.584574    5770 status_manager.go:604] "Failed to get status for pod" podUID=e52ce334-96a3-4dfb-9e1d-9977c6a2572d pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:43 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:43.584925    5770 status_manager.go:604] "Failed to get status for pod" podUID=d587f4ebedc48a236c0a6b24627b00bf pod="kube-system/kube-apiserver-functional-20220329101744-2053" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-20220329101744-2053\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:43 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:43.585256    5770 status_manager.go:604] "Failed to get status for pod" podUID=18e40e400faed752d348d0030ea8fe2a pod="kube-system/kube-controller-manager-functional-20220329101744-2053" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20220329101744-2053\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:43 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:43.585869    5770 status_manager.go:604] "Failed to get status for pod" podUID=1a5661e220be08aa9553aa0cddca9ccc pod="kube-system/etcd-functional-20220329101744-2053" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-20220329101744-2053\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:43 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:43.586294    5770 status_manager.go:604] "Failed to get status for pod" podUID=c27d44dd832d3b9434b363863b8bf820 pod="kube-system/kube-scheduler-functional-20220329101744-2053" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-20220329101744-2053\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:43 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:43.586643    5770 status_manager.go:604] "Failed to get status for pod" podUID=f9c4335e-3686-4617-91d5-454f38d2a099 pod="kube-system/kube-proxy-gwwvl" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-gwwvl\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Mar 29 17:20:44 functional-20220329101744-2053 kubelet[5770]: I0329 17:20:44.582335    5770 scope.go:110] "RemoveContainer" containerID="dae2b195616e4862e4f231515bbba86c37a032df065c2a94d7791ea720fed141"
	Mar 29 17:20:46 functional-20220329101744-2053 kubelet[5770]: E0329 17:20:46.498798    5770 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	* 
	* ==> storage-provisioner [7dbb2ebc006a] <==
	* I0329 17:19:38.830112       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0329 17:19:38.837326       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0329 17:19:38.837369       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0329 17:19:38.851718       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0329 17:19:38.851883       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220329101744-2053_98a5b9cb-6f01-47bb-9d87-599bd5057270!
	I0329 17:19:38.852345       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"65681de1-eb40-4bef-8ff2-9df41038f287", APIVersion:"v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220329101744-2053_98a5b9cb-6f01-47bb-9d87-599bd5057270 became leader
	I0329 17:19:38.952178       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220329101744-2053_98a5b9cb-6f01-47bb-9d87-599bd5057270!
	
	* 
	* ==> storage-provisioner [9c35aac5a43a] <==
	* I0329 17:20:24.682433       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0329 17:20:24.690434       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0329 17:20:24.690459       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0329 17:20:28.149045       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:32.407796       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:36.003934       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:39.058308       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:42.081556       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0329 17:20:46.519178       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: endpoints "k8s.io-minikube-hostpath" is forbidden: User "system:serviceaccount:kube-system:storage-provisioner" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
	I0329 17:20:48.673812       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0329 17:20:48.673977       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"65681de1-eb40-4bef-8ff2-9df41038f287", APIVersion:"v1", ResourceVersion:"579", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220329101744-2053_f44706d7-4241-4972-95dc-964f6e7446cb became leader
	I0329 17:20:48.674192       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220329101744-2053_f44706d7-4241-4972-95dc-964f6e7446cb!
	I0329 17:20:48.774404       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220329101744-2053_f44706d7-4241-4972-95dc-964f6e7446cb!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-20220329101744-2053 -n functional-20220329101744-2053
helpers_test.go:262: (dbg) Run:  kubectl --context functional-20220329101744-2053 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: 
helpers_test.go:273: ======> post-mortem[TestFunctional/serial/ComponentHealth]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context functional-20220329101744-2053 describe pod 
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context functional-20220329101744-2053 describe pod : exit status 1 (41.931198ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:278: kubectl --context functional-20220329101744-2053 describe pod : exit status 1
--- FAIL: TestFunctional/serial/ComponentHealth (11.24s)
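The "resource name may not be empty" failure above is mechanical: the field selector returned no pod names, so the post-mortem helper invoked kubectl describe pod with no arguments. Below is a minimal Go sketch of that same sequence with an empty-list guard, assuming kubectl is on PATH; the function name and main wrapper are hypothetical illustrations, not minikube's actual helper code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// describeNonRunningPods mirrors the post-mortem sequence above: list pods
// whose phase is not Running across all namespaces, then describe them only
// when at least one name came back.
func describeNonRunningPods(kubectlContext string) error {
	out, err := exec.Command("kubectl", "--context", kubectlContext,
		"get", "po", "-o=jsonpath={.items[*].metadata.name}", "-A",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		return fmt.Errorf("listing non-running pods: %w", err)
	}
	names := strings.Fields(string(out))
	if len(names) == 0 {
		// Calling "kubectl describe pod" with no names exits 1 with
		// "error: resource name may not be empty", as in the log above.
		fmt.Println("no non-running pods to describe")
		return nil
	}
	args := append([]string{"--context", kubectlContext, "describe", "pod"}, names...)
	desc, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(desc))
	return err
}

func main() {
	if err := describeNonRunningPods("functional-20220329101744-2053"); err != nil {
		fmt.Println("post-mortem describe failed:", err)
	}
}

Run against the profile's context, this prints describe output when non-running pods exist and skips the call otherwise, avoiding the exit-status-1 noise seen in this report.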
TestNetworkPlugins/group/custom-weave/Start (550.32s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-weave-20220329110226-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker 
E0329 11:25:46.487157    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 11:25:57.292234    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p custom-weave-20220329110226-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker : exit status 105 (9m10.297180127s)

-- stdout --
	* [custom-weave-20220329110226-2053] minikube v1.25.2 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13730
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node custom-weave-20220329110226-2053 in cluster custom-weave-20220329110226-2053
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0329 11:25:40.446131   20307 out.go:297] Setting OutFile to fd 1 ...
	I0329 11:25:40.446291   20307 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 11:25:40.446296   20307 out.go:310] Setting ErrFile to fd 2...
	I0329 11:25:40.446300   20307 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 11:25:40.446380   20307 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 11:25:40.446728   20307 out.go:304] Setting JSON to false
	I0329 11:25:40.462944   20307 start.go:114] hostinfo: {"hostname":"37310.local","uptime":5115,"bootTime":1648573225,"procs":322,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0329 11:25:40.463036   20307 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 11:25:40.509936   20307 out.go:176] * [custom-weave-20220329110226-2053] minikube v1.25.2 on Darwin 11.2.3
	I0329 11:25:40.510058   20307 notify.go:193] Checking for updates...
	I0329 11:25:40.535787   20307 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 11:25:40.561921   20307 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 11:25:40.587922   20307 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0329 11:25:40.613740   20307 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 11:25:40.639905   20307 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	I0329 11:25:40.640299   20307 config.go:176] Loaded profile config "cilium-20220329110226-2053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 11:25:40.640359   20307 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 11:25:40.743519   20307 docker.go:137] docker version: linux-20.10.6
	I0329 11:25:40.743693   20307 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 11:25:40.941589   20307 info.go:263] docker info: {ID:4GJZ:5WQJ:PTOH:OBGV:UGLB:2QMR:SRUC:WPW4:I7LT:V2VN:S3VH:GWN3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2022-03-29 18:25:40.881068102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0329 11:25:40.969491   20307 out.go:176] * Using the docker driver based on user configuration
	I0329 11:25:40.969509   20307 start.go:283] selected driver: docker
	I0329 11:25:40.969513   20307 start.go:800] validating driver "docker" against <nil>
	I0329 11:25:40.969529   20307 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0329 11:25:40.971792   20307 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 11:25:41.164261   20307 info.go:263] docker info: {ID:4GJZ:5WQJ:PTOH:OBGV:UGLB:2QMR:SRUC:WPW4:I7LT:V2VN:S3VH:GWN3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2022-03-29 18:25:41.106217386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0329 11:25:41.164408   20307 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0329 11:25:41.164563   20307 start_flags.go:837] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0329 11:25:41.164579   20307 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0329 11:25:41.164603   20307 start_flags.go:301] Found "testdata/weavenet.yaml" CNI - setting NetworkPlugin=cni
	I0329 11:25:41.164611   20307 start_flags.go:306] config:
	{Name:custom-weave-20220329110226-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220329110226-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 11:25:41.190304   20307 out.go:176] * Starting control plane node custom-weave-20220329110226-2053 in cluster custom-weave-20220329110226-2053
	I0329 11:25:41.190342   20307 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 11:25:41.237087   20307 out.go:176] * Pulling base image ...
	I0329 11:25:41.237229   20307 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 11:25:41.237289   20307 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 11:25:41.237304   20307 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 11:25:41.237356   20307 cache.go:57] Caching tarball of preloaded images
	I0329 11:25:41.237526   20307 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0329 11:25:41.237542   20307 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on docker
	I0329 11:25:41.238325   20307 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/config.json ...
	I0329 11:25:41.238430   20307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/config.json: {Name:mk26bff9e02125897b05811db0a5e2955b97f856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 11:25:41.366384   20307 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 11:25:41.366404   20307 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 11:25:41.366416   20307 cache.go:208] Successfully downloaded all kic artifacts
	I0329 11:25:41.366465   20307 start.go:348] acquiring machines lock for custom-weave-20220329110226-2053: {Name:mk3f30a53648490283a3451dec176ddc52ee260d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 11:25:41.366605   20307 start.go:352] acquired machines lock for "custom-weave-20220329110226-2053" in 128.042µs
	I0329 11:25:41.366657   20307 start.go:90] Provisioning new machine with config: &{Name:custom-weave-20220329110226-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220329110226-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 11:25:41.366759   20307 start.go:127] createHost starting for "" (driver="docker")
	I0329 11:25:41.413866   20307 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0329 11:25:41.414073   20307 start.go:161] libmachine.API.Create for "custom-weave-20220329110226-2053" (driver="docker")
	I0329 11:25:41.414105   20307 client.go:168] LocalClient.Create starting
	I0329 11:25:41.414225   20307 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem
	I0329 11:25:41.414273   20307 main.go:130] libmachine: Decoding PEM data...
	I0329 11:25:41.414289   20307 main.go:130] libmachine: Parsing certificate...
	I0329 11:25:41.414343   20307 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem
	I0329 11:25:41.414375   20307 main.go:130] libmachine: Decoding PEM data...
	I0329 11:25:41.414386   20307 main.go:130] libmachine: Parsing certificate...
	I0329 11:25:41.414848   20307 cli_runner.go:133] Run: docker network inspect custom-weave-20220329110226-2053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0329 11:25:41.536410   20307 cli_runner.go:180] docker network inspect custom-weave-20220329110226-2053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0329 11:25:41.536520   20307 network_create.go:262] running [docker network inspect custom-weave-20220329110226-2053] to gather additional debugging logs...
	I0329 11:25:41.536546   20307 cli_runner.go:133] Run: docker network inspect custom-weave-20220329110226-2053
	W0329 11:25:41.665123   20307 cli_runner.go:180] docker network inspect custom-weave-20220329110226-2053 returned with exit code 1
	I0329 11:25:41.665147   20307 network_create.go:265] error running [docker network inspect custom-weave-20220329110226-2053]: docker network inspect custom-weave-20220329110226-2053: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-weave-20220329110226-2053
	I0329 11:25:41.665164   20307 network_create.go:267] output of [docker network inspect custom-weave-20220329110226-2053]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-weave-20220329110226-2053
	
	** /stderr **
	I0329 11:25:41.665260   20307 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0329 11:25:41.789721   20307 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0005e6b40] misses:0}
	I0329 11:25:41.789762   20307 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0329 11:25:41.789781   20307 network_create.go:114] attempt to create docker network custom-weave-20220329110226-2053 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0329 11:25:41.789866   20307 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220329110226-2053
	I0329 11:25:44.609773   20307 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true custom-weave-20220329110226-2053: (2.819862746s)
	I0329 11:25:44.609795   20307 network_create.go:98] docker network custom-weave-20220329110226-2053 192.168.49.0/24 created
	I0329 11:25:44.609817   20307 kic.go:106] calculated static IP "192.168.49.2" for the "custom-weave-20220329110226-2053" container
	I0329 11:25:44.609932   20307 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0329 11:25:44.731385   20307 cli_runner.go:133] Run: docker volume create custom-weave-20220329110226-2053 --label name.minikube.sigs.k8s.io=custom-weave-20220329110226-2053 --label created_by.minikube.sigs.k8s.io=true
	I0329 11:25:44.849805   20307 oci.go:102] Successfully created a docker volume custom-weave-20220329110226-2053
	I0329 11:25:44.849922   20307 cli_runner.go:133] Run: docker run --rm --name custom-weave-20220329110226-2053-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220329110226-2053 --entrypoint /usr/bin/test -v custom-weave-20220329110226-2053:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0329 11:25:45.358870   20307 oci.go:106] Successfully prepared a docker volume custom-weave-20220329110226-2053
	I0329 11:25:45.358912   20307 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 11:25:45.358926   20307 kic.go:179] Starting extracting preloaded images to volume ...
	I0329 11:25:45.359042   20307 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220329110226-2053:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0329 11:25:51.202492   20307 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-weave-20220329110226-2053:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (5.843378296s)
	I0329 11:25:51.202514   20307 kic.go:188] duration metric: took 5.843596 seconds to extract preloaded images to volume
	I0329 11:25:51.202640   20307 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0329 11:25:51.393532   20307 cli_runner.go:133] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220329110226-2053 --name custom-weave-20220329110226-2053 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220329110226-2053 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220329110226-2053 --network custom-weave-20220329110226-2053 --ip 192.168.49.2 --volume custom-weave-20220329110226-2053:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0329 11:25:54.010719   20307 cli_runner.go:186] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-weave-20220329110226-2053 --name custom-weave-20220329110226-2053 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-weave-20220329110226-2053 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-weave-20220329110226-2053 --network custom-weave-20220329110226-2053 --ip 192.168.49.2 --volume custom-weave-20220329110226-2053:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: (2.617106543s)
	I0329 11:25:54.010847   20307 cli_runner.go:133] Run: docker container inspect custom-weave-20220329110226-2053 --format={{.State.Running}}
	I0329 11:25:54.150307   20307 cli_runner.go:133] Run: docker container inspect custom-weave-20220329110226-2053 --format={{.State.Status}}
	I0329 11:25:54.280639   20307 cli_runner.go:133] Run: docker exec custom-weave-20220329110226-2053 stat /var/lib/dpkg/alternatives/iptables
	I0329 11:25:54.461615   20307 oci.go:278] the created container "custom-weave-20220329110226-2053" has a running status.
	I0329 11:25:54.461643   20307 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329110226-2053/id_rsa...
	I0329 11:25:54.616709   20307 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329110226-2053/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0329 11:25:54.812613   20307 cli_runner.go:133] Run: docker container inspect custom-weave-20220329110226-2053 --format={{.State.Status}}
	I0329 11:25:54.939130   20307 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0329 11:25:54.939153   20307 kic_runner.go:114] Args: [docker exec --privileged custom-weave-20220329110226-2053 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0329 11:25:55.128607   20307 cli_runner.go:133] Run: docker container inspect custom-weave-20220329110226-2053 --format={{.State.Status}}
	I0329 11:25:55.259814   20307 machine.go:88] provisioning docker machine ...
	I0329 11:25:55.259885   20307 ubuntu.go:169] provisioning hostname "custom-weave-20220329110226-2053"
	I0329 11:25:55.260079   20307 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329110226-2053
	I0329 11:25:55.395521   20307 main.go:130] libmachine: Using SSH client type: native
	I0329 11:25:55.395744   20307 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13a3660] 0x13a6740 <nil>  [] 0s} 127.0.0.1 51077 <nil> <nil>}
	I0329 11:25:55.395760   20307 main.go:130] libmachine: About to run SSH command:
	sudo hostname custom-weave-20220329110226-2053 && echo "custom-weave-20220329110226-2053" | sudo tee /etc/hostname
	I0329 11:25:55.397914   20307 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0329 11:25:58.538218   20307 main.go:130] libmachine: SSH cmd err, output: <nil>: custom-weave-20220329110226-2053
	
	I0329 11:25:58.538327   20307 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329110226-2053
	I0329 11:25:58.660092   20307 main.go:130] libmachine: Using SSH client type: native
	I0329 11:25:58.660252   20307 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13a3660] 0x13a6740 <nil>  [] 0s} 127.0.0.1 51077 <nil> <nil>}
	I0329 11:25:58.660267   20307 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-weave-20220329110226-2053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-weave-20220329110226-2053/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-weave-20220329110226-2053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0329 11:25:58.782146   20307 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0329 11:25:58.782170   20307 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube}
	I0329 11:25:58.782187   20307 ubuntu.go:177] setting up certificates
	I0329 11:25:58.782199   20307 provision.go:83] configureAuth start
	I0329 11:25:58.782330   20307 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220329110226-2053
	I0329 11:25:58.904162   20307 provision.go:138] copyHostCerts
	I0329 11:25:58.904257   20307 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem, removing ...
	I0329 11:25:58.904265   20307 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem
	I0329 11:25:58.904359   20307 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.pem (1078 bytes)
	I0329 11:25:58.904560   20307 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem, removing ...
	I0329 11:25:58.904569   20307 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem
	I0329 11:25:58.904634   20307 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cert.pem (1123 bytes)
	I0329 11:25:58.904805   20307 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem, removing ...
	I0329 11:25:58.904812   20307 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem
	I0329 11:25:58.904879   20307 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/key.pem (1679 bytes)
	I0329 11:25:58.905003   20307 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem org=jenkins.custom-weave-20220329110226-2053 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube custom-weave-20220329110226-2053]
	I0329 11:25:59.102796   20307 provision.go:172] copyRemoteCerts
	I0329 11:25:59.102855   20307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0329 11:25:59.102912   20307 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329110226-2053
	I0329 11:25:59.226773   20307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51077 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329110226-2053/id_rsa Username:docker}
	I0329 11:25:59.316423   20307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0329 11:25:59.335259   20307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0329 11:25:59.352710   20307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0329 11:25:59.373147   20307 provision.go:86] duration metric: configureAuth took 590.929733ms
	I0329 11:25:59.373160   20307 ubuntu.go:193] setting minikube options for container-runtime
	I0329 11:25:59.373319   20307 config.go:176] Loaded profile config "custom-weave-20220329110226-2053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 11:25:59.373417   20307 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329110226-2053
	I0329 11:25:59.493873   20307 main.go:130] libmachine: Using SSH client type: native
	I0329 11:25:59.494025   20307 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13a3660] 0x13a6740 <nil>  [] 0s} 127.0.0.1 51077 <nil> <nil>}
	I0329 11:25:59.494040   20307 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0329 11:25:59.627962   20307 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0329 11:25:59.627976   20307 ubuntu.go:71] root file system type: overlay
	I0329 11:25:59.628162   20307 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0329 11:25:59.628268   20307 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329110226-2053
	I0329 11:25:59.750758   20307 main.go:130] libmachine: Using SSH client type: native
	I0329 11:25:59.750922   20307 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13a3660] 0x13a6740 <nil>  [] 0s} 127.0.0.1 51077 <nil> <nil>}
	I0329 11:25:59.750969   20307 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0329 11:25:59.888145   20307 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0329 11:25:59.888277   20307 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329110226-2053
	I0329 11:26:00.016542   20307 main.go:130] libmachine: Using SSH client type: native
	I0329 11:26:00.016687   20307 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x13a3660] 0x13a6740 <nil>  [] 0s} 127.0.0.1 51077 <nil> <nil>}
	I0329 11:26:00.016707   20307 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0329 11:26:15.390996   20307 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-03-10 14:05:44.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-03-29 18:25:59.895310793 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0329 11:26:15.391025   20307 machine.go:91] provisioned docker machine in 20.131211932s
	I0329 11:26:15.391033   20307 client.go:171] LocalClient.Create took 33.976967213s
	I0329 11:26:15.391057   20307 start.go:169] duration metric: libmachine.API.Create for "custom-weave-20220329110226-2053" took 33.977025979s
	I0329 11:26:15.391078   20307 start.go:302] post-start starting for "custom-weave-20220329110226-2053" (driver="docker")
	I0329 11:26:15.391087   20307 start.go:312] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0329 11:26:15.391178   20307 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0329 11:26:15.391292   20307 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329110226-2053
	I0329 11:26:15.522864   20307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51077 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329110226-2053/id_rsa Username:docker}
	I0329 11:26:15.611504   20307 ssh_runner.go:195] Run: cat /etc/os-release
	I0329 11:26:15.615391   20307 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0329 11:26:15.615409   20307 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0329 11:26:15.615415   20307 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0329 11:26:15.615422   20307 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0329 11:26:15.615433   20307 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/addons for local assets ...
	I0329 11:26:15.615538   20307 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files for local assets ...
	I0329 11:26:15.615691   20307 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/20532.pem -> 20532.pem in /etc/ssl/certs
	I0329 11:26:15.615862   20307 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0329 11:26:15.623468   20307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/20532.pem --> /etc/ssl/certs/20532.pem (1708 bytes)
	I0329 11:26:15.640809   20307 start.go:305] post-start completed in 249.720485ms
	I0329 11:26:15.641366   20307 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220329110226-2053
	I0329 11:26:15.757476   20307 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/config.json ...
	I0329 11:26:15.757884   20307 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 11:26:15.757952   20307 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329110226-2053
	I0329 11:26:15.878596   20307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51077 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329110226-2053/id_rsa Username:docker}
	I0329 11:26:15.962696   20307 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0329 11:26:15.967291   20307 start.go:130] duration metric: createHost completed in 34.600567818s
	I0329 11:26:15.967313   20307 start.go:81] releasing machines lock for "custom-weave-20220329110226-2053", held for 34.600737328s
	I0329 11:26:15.967455   20307 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-weave-20220329110226-2053
	I0329 11:26:16.088361   20307 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0329 11:26:16.088368   20307 ssh_runner.go:195] Run: systemctl --version
	I0329 11:26:16.088444   20307 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329110226-2053
	I0329 11:26:16.088480   20307 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329110226-2053
	I0329 11:26:16.215040   20307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51077 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329110226-2053/id_rsa Username:docker}
	I0329 11:26:16.215048   20307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51077 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329110226-2053/id_rsa Username:docker}
	I0329 11:26:16.394887   20307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0329 11:26:16.404163   20307 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 11:26:16.413268   20307 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0329 11:26:16.413341   20307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0329 11:26:16.422827   20307 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0329 11:26:16.435910   20307 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0329 11:26:16.492498   20307 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0329 11:26:16.548344   20307 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0329 11:26:16.558724   20307 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0329 11:26:16.614968   20307 ssh_runner.go:195] Run: sudo systemctl start docker
	I0329 11:26:16.624855   20307 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 11:26:16.681212   20307 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0329 11:26:16.745885   20307 out.go:203] * Preparing Kubernetes v1.23.5 on Docker 20.10.13 ...
	I0329 11:26:16.746064   20307 cli_runner.go:133] Run: docker exec -t custom-weave-20220329110226-2053 dig +short host.docker.internal
	I0329 11:26:16.915983   20307 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0329 11:26:16.916080   20307 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0329 11:26:16.920830   20307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 11:26:16.930502   20307 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-weave-20220329110226-2053
	I0329 11:26:17.047732   20307 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 11:26:17.047813   20307 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 11:26:17.080104   20307 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 11:26:17.080121   20307 docker.go:537] Images already preloaded, skipping extraction
	I0329 11:26:17.080242   20307 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0329 11:26:17.110962   20307 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.5
	k8s.gcr.io/kube-proxy:v1.23.5
	k8s.gcr.io/kube-scheduler:v1.23.5
	k8s.gcr.io/kube-controller-manager:v1.23.5
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0329 11:26:17.110979   20307 cache_images.go:84] Images are preloaded, skipping loading
	I0329 11:26:17.111090   20307 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0329 11:26:17.205717   20307 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0329 11:26:17.205765   20307 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0329 11:26:17.205794   20307 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-weave-20220329110226-2053 NodeName:custom-weave-20220329110226-2053 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0329 11:26:17.205906   20307 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "custom-weave-20220329110226-2053"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
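	(The kubeadm config printed above is what later gets written to /var/tmp/minikube/kubeadm.yaml. As a hedged sketch, a file like this can be sanity-checked without mutating the node via kubeadm's dry-run mode:
	$ sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	)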
	I0329 11:26:17.205991   20307 kubeadm.go:936] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=custom-weave-20220329110226-2053 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220329110226-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:}
	I0329 11:26:17.206048   20307 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0329 11:26:17.214915   20307 binaries.go:44] Found k8s binaries, skipping transfer
	I0329 11:26:17.214971   20307 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0329 11:26:17.222841   20307 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0329 11:26:17.234672   20307 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0329 11:26:17.247442   20307 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0329 11:26:17.260036   20307 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0329 11:26:17.263475   20307 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0329 11:26:17.272873   20307 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053 for IP: 192.168.49.2
	I0329 11:26:17.273001   20307 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key
	I0329 11:26:17.273060   20307 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key
	I0329 11:26:17.273111   20307 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/client.key
	I0329 11:26:17.273128   20307 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/client.crt with IP's: []
	I0329 11:26:17.510528   20307 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/client.crt ...
	I0329 11:26:17.510544   20307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/client.crt: {Name:mkf498cdb5a46a77ec16f3fc7948d8ffaae34f1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 11:26:17.510870   20307 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/client.key ...
	I0329 11:26:17.510878   20307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/client.key: {Name:mk5dc7c9e7f280433833a89b412e9b558596f9a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 11:26:17.511072   20307 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/apiserver.key.dd3b5fb2
	I0329 11:26:17.511091   20307 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0329 11:26:17.577230   20307 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/apiserver.crt.dd3b5fb2 ...
	I0329 11:26:17.577239   20307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/apiserver.crt.dd3b5fb2: {Name:mke69c27c5c6d3bad1397a7456b464697a35a96f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 11:26:17.577450   20307 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/apiserver.key.dd3b5fb2 ...
	I0329 11:26:17.577459   20307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/apiserver.key.dd3b5fb2: {Name:mk0104ddbeb959dc6789a663926b860ed7c5ab51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 11:26:17.577620   20307 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/apiserver.crt
	I0329 11:26:17.577792   20307 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/apiserver.key
	I0329 11:26:17.577952   20307 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/proxy-client.key
	I0329 11:26:17.577968   20307 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/proxy-client.crt with IP's: []
	I0329 11:26:17.674168   20307 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/proxy-client.crt ...
	I0329 11:26:17.674181   20307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/proxy-client.crt: {Name:mk6f6184ee662c601adb71898a68b09200aaab0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 11:26:17.674436   20307 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/proxy-client.key ...
	I0329 11:26:17.674444   20307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/proxy-client.key: {Name:mkecb60a24b1829140b6d63884fc0b410aa2cac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 11:26:17.674810   20307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/2053.pem (1338 bytes)
	W0329 11:26:17.674859   20307 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/2053_empty.pem, impossibly tiny 0 bytes
	I0329 11:26:17.674870   20307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca-key.pem (1675 bytes)
	I0329 11:26:17.674906   20307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/ca.pem (1078 bytes)
	I0329 11:26:17.674943   20307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/cert.pem (1123 bytes)
	I0329 11:26:17.674974   20307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/key.pem (1679 bytes)
	I0329 11:26:17.675043   20307 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/20532.pem (1708 bytes)
	I0329 11:26:17.675557   20307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0329 11:26:17.694329   20307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0329 11:26:17.711191   20307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0329 11:26:17.728342   20307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/custom-weave-20220329110226-2053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0329 11:26:17.744964   20307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0329 11:26:17.761507   20307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0329 11:26:17.778638   20307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0329 11:26:17.794933   20307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0329 11:26:17.812985   20307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0329 11:26:17.830333   20307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/certs/2053.pem --> /usr/share/ca-certificates/2053.pem (1338 bytes)
	I0329 11:26:17.846764   20307 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/ssl/certs/20532.pem --> /usr/share/ca-certificates/20532.pem (1708 bytes)
	I0329 11:26:17.864161   20307 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0329 11:26:17.876777   20307 ssh_runner.go:195] Run: openssl version
	I0329 11:26:17.884973   20307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20532.pem && ln -fs /usr/share/ca-certificates/20532.pem /etc/ssl/certs/20532.pem"
	I0329 11:26:17.894030   20307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20532.pem
	I0329 11:26:17.897755   20307 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Mar 29 17:17 /usr/share/ca-certificates/20532.pem
	I0329 11:26:17.897802   20307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20532.pem
	I0329 11:26:17.903260   20307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20532.pem /etc/ssl/certs/3ec20f2e.0"
	I0329 11:26:17.911199   20307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0329 11:26:17.920792   20307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0329 11:26:17.924989   20307 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Mar 29 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0329 11:26:17.925039   20307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0329 11:26:17.930627   20307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0329 11:26:17.938260   20307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2053.pem && ln -fs /usr/share/ca-certificates/2053.pem /etc/ssl/certs/2053.pem"
	I0329 11:26:17.946481   20307 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2053.pem
	I0329 11:26:17.950464   20307 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Mar 29 17:17 /usr/share/ca-certificates/2053.pem
	I0329 11:26:17.950514   20307 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2053.pem
	I0329 11:26:17.956088   20307 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2053.pem /etc/ssl/certs/51391683.0"
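	(The test/ln pairs above implement OpenSSL's c_rehash convention: a CA certificate is discoverable once a symlink named <subject-hash>.0 in /etc/ssl/certs points at it. The hashes used here (3ec20f2e, b5213941, 51391683) come from the openssl x509 -hash invocations logged above; for example, for minikubeCA:
	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	)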
	I0329 11:26:17.963847   20307 kubeadm.go:391] StartCluster: {Name:custom-weave-20220329110226-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:custom-weave-20220329110226-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/weavenet.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 11:26:17.963959   20307 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0329 11:26:17.993505   20307 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0329 11:26:18.000832   20307 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0329 11:26:18.008180   20307 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0329 11:26:18.008239   20307 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0329 11:26:18.015283   20307 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0329 11:26:18.015621   20307 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0329 11:26:31.573417   20307 out.go:203]   - Generating certificates and keys ...
	I0329 11:26:31.662527   20307 out.go:203]   - Booting up control plane ...
	I0329 11:26:31.692519   20307 out.go:203]   - Configuring RBAC rules ...
	I0329 11:26:31.694574   20307 cni.go:93] Creating CNI manager for "testdata/weavenet.yaml"
	I0329 11:26:31.792467   20307 out.go:176] * Configuring testdata/weavenet.yaml (Container Networking Interface) ...
	I0329 11:26:31.792558   20307 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0329 11:26:31.792610   20307 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0329 11:26:31.799372   20307 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/tmp/minikube/cni.yaml': No such file or directory
	I0329 11:26:31.799399   20307 ssh_runner.go:362] scp testdata/weavenet.yaml --> /var/tmp/minikube/cni.yaml (10948 bytes)
	I0329 11:26:31.817159   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
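	(Applying testdata/weavenet.yaml creates, assuming the stock Weave Net manifest layout (the testdata file itself is not shown here), a weave-net DaemonSet in kube-system whose rollout could be followed by hand with:
	$ kubectl -n kube-system rollout status daemonset/weave-net --timeout=5m
	)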
	I0329 11:26:32.419884   20307 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0329 11:26:32.419969   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=923781973407d6dc536f326caa216e4920fd75c3 minikube.k8s.io/name=custom-weave-20220329110226-2053 minikube.k8s.io/updated_at=2022_03_29T11_26_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:32.419972   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:32.482916   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:32.490067   20307 ops.go:34] apiserver oom_adj: -16
	I0329 11:26:33.046502   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:33.544030   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:34.043121   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:34.542740   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:35.042505   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:35.547565   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:36.042354   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:36.542887   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:37.042464   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:37.542611   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:38.042412   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:38.542368   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:39.042385   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:39.547680   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:40.042350   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:40.542386   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:41.048130   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:41.547773   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:42.044888   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:42.542563   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:43.043549   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:43.542454   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:44.042770   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:44.546321   20307 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0329 11:26:44.600755   20307 kubeadm.go:1020] duration metric: took 12.180873684s to wait for elevateKubeSystemPrivileges.
	I0329 11:26:44.600784   20307 kubeadm.go:393] StartCluster complete in 26.636978156s
	I0329 11:26:44.600800   20307 settings.go:142] acquiring lock: {Name:mk5b01a4191281d3f224b52386a90714bd22cc72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 11:26:44.600894   20307 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 11:26:44.601354   20307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig: {Name:mk7bef67bea8eb326a483bde80a52ac63c137849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 11:26:45.120090   20307 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "custom-weave-20220329110226-2053" rescaled to 1
	I0329 11:26:45.120133   20307 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0329 11:26:45.120142   20307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0329 11:26:45.120165   20307 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0329 11:26:45.147653   20307 out.go:176] * Verifying Kubernetes components...
	I0329 11:26:45.120319   20307 config.go:176] Loaded profile config "custom-weave-20220329110226-2053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 11:26:45.147727   20307 addons.go:65] Setting storage-provisioner=true in profile "custom-weave-20220329110226-2053"
	I0329 11:26:45.147774   20307 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 11:26:45.147789   20307 addons.go:153] Setting addon storage-provisioner=true in "custom-weave-20220329110226-2053"
	I0329 11:26:45.147727   20307 addons.go:65] Setting default-storageclass=true in profile "custom-weave-20220329110226-2053"
	W0329 11:26:45.147799   20307 addons.go:165] addon storage-provisioner should already be in state true
	I0329 11:26:45.147829   20307 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-weave-20220329110226-2053"
	I0329 11:26:45.147834   20307 host.go:66] Checking if "custom-weave-20220329110226-2053" exists ...
	I0329 11:26:45.148132   20307 cli_runner.go:133] Run: docker container inspect custom-weave-20220329110226-2053 --format={{.State.Status}}
	I0329 11:26:45.148232   20307 cli_runner.go:133] Run: docker container inspect custom-weave-20220329110226-2053 --format={{.State.Status}}
	I0329 11:26:45.178173   20307 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" custom-weave-20220329110226-2053
	I0329 11:26:45.178216   20307 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0329 11:26:45.309269   20307 addons.go:153] Setting addon default-storageclass=true in "custom-weave-20220329110226-2053"
	W0329 11:26:45.331086   20307 addons.go:165] addon default-storageclass should already be in state true
	I0329 11:26:45.331059   20307 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 11:26:45.331106   20307 host.go:66] Checking if "custom-weave-20220329110226-2053" exists ...
	I0329 11:26:45.331259   20307 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 11:26:45.331269   20307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0329 11:26:45.331359   20307 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329110226-2053
	I0329 11:26:45.331755   20307 cli_runner.go:133] Run: docker container inspect custom-weave-20220329110226-2053 --format={{.State.Status}}
	I0329 11:26:45.339811   20307 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0329 11:26:45.345824   20307 node_ready.go:35] waiting up to 5m0s for node "custom-weave-20220329110226-2053" to be "Ready" ...
	I0329 11:26:45.349540   20307 node_ready.go:49] node "custom-weave-20220329110226-2053" has status "Ready":"True"
	I0329 11:26:45.349550   20307 node_ready.go:38] duration metric: took 3.706406ms waiting for node "custom-weave-20220329110226-2053" to be "Ready" ...
	I0329 11:26:45.349556   20307 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
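	(pod_ready.go now polls each system-critical pod for the Ready condition, producing the repeated entries below. The equivalent one-shot check with stock tooling, as a sketch (minikube itself uses client-go, not kubectl), would be:
	$ kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=5m
	)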
	I0329 11:26:45.357634   20307 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-8dqds" in "kube-system" namespace to be "Ready" ...
	I0329 11:26:45.468296   20307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51077 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329110226-2053/id_rsa Username:docker}
	I0329 11:26:45.468358   20307 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0329 11:26:45.511871   20307 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0329 11:26:45.511995   20307 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-weave-20220329110226-2053
	I0329 11:26:45.642285   20307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0329 11:26:45.651316   20307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51077 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/custom-weave-20220329110226-2053/id_rsa Username:docker}
	I0329 11:26:45.845271   20307 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0329 11:26:46.223580   20307 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0329 11:26:46.223607   20307 addons.go:417] enableAddons completed in 1.103454729s
	I0329 11:26:47.370160   20307 pod_ready.go:102] pod "coredns-64897985d-8dqds" in "kube-system" namespace has status "Ready":"False"
	I0329 11:26:49.369969   20307 pod_ready.go:97] error getting pod "coredns-64897985d-8dqds" in "kube-system" namespace (skipping!): pods "coredns-64897985d-8dqds" not found
	I0329 11:26:49.369986   20307 pod_ready.go:81] duration metric: took 4.012342412s waiting for pod "coredns-64897985d-8dqds" in "kube-system" namespace to be "Ready" ...
	E0329 11:26:49.370002   20307 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-64897985d-8dqds" in "kube-system" namespace (skipping!): pods "coredns-64897985d-8dqds" not found
	I0329 11:26:49.370008   20307 pod_ready.go:78] waiting up to 5m0s for pod "coredns-64897985d-mpfg8" in "kube-system" namespace to be "Ready" ...
	I0329 11:26:51.393618   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:26:53.894046   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:26:56.393321   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:26:58.894470   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:00.895233   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:03.394190   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:05.893372   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:08.394270   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:10.893353   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:12.894778   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:15.394712   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:17.892965   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:19.893305   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:22.393716   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:24.395409   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:26.893117   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:28.894920   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:31.393301   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:33.893876   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:35.894879   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:38.396184   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:40.893816   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:42.894090   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:45.395064   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:47.891503   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:49.893718   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:51.894824   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:54.392129   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:56.392168   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:27:58.393121   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:00.394147   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:02.893438   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:04.894706   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:06.895470   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:09.392915   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:11.893069   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:13.893237   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:15.894475   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:18.392100   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:20.394403   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:22.891593   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:24.896137   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:27.395134   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:29.895272   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:32.394800   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:34.893749   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:37.392632   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:39.393276   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:41.393370   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:43.393600   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:45.891251   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:47.894814   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:50.394529   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:52.892013   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:54.894200   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:57.393923   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:28:59.893908   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:02.392206   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:04.393720   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:06.393828   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:08.394050   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:10.893365   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:13.392785   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:15.396028   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:17.893043   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:20.394589   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:22.397896   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:24.893321   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:27.392176   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:29.392939   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:31.395797   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:33.893720   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:36.393774   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:38.393975   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:40.395070   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:42.893694   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:44.894251   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:47.391820   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:49.392740   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:51.894081   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:54.392061   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:56.395249   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:29:58.892029   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:00.893408   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:03.391990   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:05.392947   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:07.892275   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:10.393279   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:12.393669   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:14.892889   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:17.393494   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:19.893907   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:22.391285   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:24.392658   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:26.895158   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:28.895896   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:31.394199   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:33.893598   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:35.894026   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:38.393454   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:40.892292   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:42.892491   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:45.393040   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:47.891564   20307 pod_ready.go:102] pod "coredns-64897985d-mpfg8" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:49.398332   20307 pod_ready.go:81] duration metric: took 4m0.028844457s waiting for pod "coredns-64897985d-mpfg8" in "kube-system" namespace to be "Ready" ...
	E0329 11:30:49.398345   20307 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0329 11:30:49.398350   20307 pod_ready.go:78] waiting up to 5m0s for pod "etcd-custom-weave-20220329110226-2053" in "kube-system" namespace to be "Ready" ...
	I0329 11:30:49.403888   20307 pod_ready.go:92] pod "etcd-custom-weave-20220329110226-2053" in "kube-system" namespace has status "Ready":"True"
	I0329 11:30:49.403897   20307 pod_ready.go:81] duration metric: took 5.546929ms waiting for pod "etcd-custom-weave-20220329110226-2053" in "kube-system" namespace to be "Ready" ...
	I0329 11:30:49.403906   20307 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-custom-weave-20220329110226-2053" in "kube-system" namespace to be "Ready" ...
	I0329 11:30:49.408940   20307 pod_ready.go:92] pod "kube-apiserver-custom-weave-20220329110226-2053" in "kube-system" namespace has status "Ready":"True"
	I0329 11:30:49.408950   20307 pod_ready.go:81] duration metric: took 5.04231ms waiting for pod "kube-apiserver-custom-weave-20220329110226-2053" in "kube-system" namespace to be "Ready" ...
	I0329 11:30:49.408957   20307 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-custom-weave-20220329110226-2053" in "kube-system" namespace to be "Ready" ...
	I0329 11:30:49.413719   20307 pod_ready.go:92] pod "kube-controller-manager-custom-weave-20220329110226-2053" in "kube-system" namespace has status "Ready":"True"
	I0329 11:30:49.413728   20307 pod_ready.go:81] duration metric: took 4.764573ms waiting for pod "kube-controller-manager-custom-weave-20220329110226-2053" in "kube-system" namespace to be "Ready" ...
	I0329 11:30:49.413735   20307 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-wm4zb" in "kube-system" namespace to be "Ready" ...
	I0329 11:30:49.797381   20307 pod_ready.go:92] pod "kube-proxy-wm4zb" in "kube-system" namespace has status "Ready":"True"
	I0329 11:30:49.797393   20307 pod_ready.go:81] duration metric: took 383.964026ms waiting for pod "kube-proxy-wm4zb" in "kube-system" namespace to be "Ready" ...
	I0329 11:30:49.797401   20307 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-custom-weave-20220329110226-2053" in "kube-system" namespace to be "Ready" ...
	I0329 11:30:50.190053   20307 pod_ready.go:92] pod "kube-scheduler-custom-weave-20220329110226-2053" in "kube-system" namespace has status "Ready":"True"
	I0329 11:30:50.190065   20307 pod_ready.go:81] duration metric: took 392.973906ms waiting for pod "kube-scheduler-custom-weave-20220329110226-2053" in "kube-system" namespace to be "Ready" ...
	I0329 11:30:50.190072   20307 pod_ready.go:78] waiting up to 5m0s for pod "weave-net-kfmqj" in "kube-system" namespace to be "Ready" ...
	I0329 11:30:52.610830   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:55.106905   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:30:57.603654   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:00.103648   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:02.106618   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:04.600422   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:06.600120   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:08.600736   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:11.098679   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:13.098540   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:15.098597   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:17.099933   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:19.601227   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:22.097931   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:24.098348   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:26.100936   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:28.595413   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:30.596452   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:33.100825   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:35.595452   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:37.600002   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:39.603709   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:42.098290   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:44.099070   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:46.596946   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:49.097733   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:51.600628   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:54.096483   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:56.096680   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:31:58.595799   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:00.597297   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:03.097579   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:05.598586   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:08.095817   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:10.097082   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:12.097270   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:14.100894   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:16.599281   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:19.095969   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:21.098875   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:23.596642   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:25.599677   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:28.099867   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:30.597399   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:33.097361   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:35.595447   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:38.097170   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:40.595363   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:42.597252   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:45.096782   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:47.597535   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:49.598842   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:52.097463   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:54.598462   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:57.094766   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:32:59.095741   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:01.097601   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:03.602983   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:06.097662   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:08.105835   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:10.595694   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:13.096713   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:15.596189   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:18.095023   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:20.102057   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:22.596995   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:25.097052   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:27.100688   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:29.595437   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:32.096117   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:34.594673   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:37.092087   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:39.092507   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:41.100170   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:43.590802   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:45.594417   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:48.100087   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:50.594269   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:52.598337   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:55.100084   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:57.101307   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:33:59.595032   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:01.596744   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:04.093478   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:06.592067   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:08.592314   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:10.593258   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:13.097505   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:15.593822   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:18.097207   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:20.593393   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:23.091883   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:25.094371   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:27.094554   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:29.592201   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:31.592885   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:33.594900   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:36.092613   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:38.092858   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:40.592843   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:43.092974   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:45.093406   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:47.592258   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:49.596253   20307 pod_ready.go:102] pod "weave-net-kfmqj" in "kube-system" namespace has status "Ready":"False"
	I0329 11:34:50.597979   20307 pod_ready.go:81] duration metric: took 4m0.420601729s waiting for pod "weave-net-kfmqj" in "kube-system" namespace to be "Ready" ...
	E0329 11:34:50.597990   20307 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0329 11:34:50.597993   20307 pod_ready.go:38] duration metric: took 8m5.262306193s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0329 11:34:50.598014   20307 api_server.go:51] waiting for apiserver process to appear ...
	I0329 11:34:50.623548   20307 out.go:176] 
	W0329 11:34:50.623637   20307 out.go:241] X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 5m0s for node: wait for apiserver proc: apiserver process never appeared
	W0329 11:34:50.623697   20307 out.go:241] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W0329 11:34:50.623704   20307 out.go:241] * Related issues:
	* Related issues:
	W0329 11:34:50.623732   20307 out.go:241]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W0329 11:34:50.623774   20307 out.go:241]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I0329 11:34:50.649548   20307 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 105
--- FAIL: TestNetworkPlugins/group/custom-weave/Start (550.32s)
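
The wall of pod_ready.go:102 lines above is minikube polling each pod's PodReady condition until the per-pod deadlines expire; the run then exits with K8S_APISERVER_MISSING because the apiserver process never appears when the harness checks for it afterwards. For local debugging, a minimal client-go sketch of the same readiness check (the pod name is taken from the log; the podReady helper and kubeconfig handling here are illustrative, not minikube's actual code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True, the same
// condition behind the "Ready":"False" messages in the log above.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumes the kubeconfig context created by the failed profile still exists.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(context.Background(), cs, "kube-system", "weave-net-kfmqj")
	fmt.Println("ready:", ready, "err:", err)
}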
E0329 11:54:48.026089    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:55:04.493887    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 11:55:06.141793    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:55:20.903211    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory
E0329 11:55:40.675734    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:55:57.543878    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:56:00.180912    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:56:07.872873    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
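
The trailing cert_rotation.go:168 errors appear to be leftover noise rather than additional failures: client-go's certificate-rotation watcher in the long-lived test process (pid 2053) still references client certificates of profiles whose clusters were deleted after their tests finished, so each reload attempt fails with "no such file or directory". A trivial sketch confirming the certificate is simply gone (path copied from the errors above):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	cert := "/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt"
	if _, err := os.Stat(cert); errors.Is(err, fs.ErrNotExist) {
		// The rotation watcher keeps retrying this open until the process exits.
		fmt.Println("client.crt no longer exists")
	}
}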


Test pass (275/299)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 20.65
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.28
10 TestDownloadOnly/v1.23.5/json-events 8.23
11 TestDownloadOnly/v1.23.5/preload-exists 0
14 TestDownloadOnly/v1.23.5/kubectl 0
15 TestDownloadOnly/v1.23.5/LogsDuration 0.28
17 TestDownloadOnly/v1.23.6-rc.0/json-events 8.36
18 TestDownloadOnly/v1.23.6-rc.0/preload-exists 0
21 TestDownloadOnly/v1.23.6-rc.0/kubectl 0
22 TestDownloadOnly/v1.23.6-rc.0/LogsDuration 0.28
23 TestDownloadOnly/DeleteAll 1.11
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.63
25 TestDownloadOnlyKic 7.38
26 TestBinaryMirror 1.9
27 TestOffline 127.13
29 TestAddons/Setup 142.27
33 TestAddons/parallel/MetricsServer 5.73
34 TestAddons/parallel/HelmTiller 11.57
36 TestAddons/parallel/CSI 42.16
38 TestAddons/serial/GCPAuth 15.32
39 TestAddons/StoppedEnableDisable 18.47
40 TestCertOptions 69.16
41 TestCertExpiration 263.87
42 TestDockerFlags 56.67
43 TestForceSystemdFlag 78.77
44 TestForceSystemdEnv 81.54
46 TestHyperKitDriverInstallOrUpdate 6.57
49 TestErrorSpam/setup 73.01
50 TestErrorSpam/start 2.4
51 TestErrorSpam/status 1.94
52 TestErrorSpam/pause 2.13
53 TestErrorSpam/unpause 2.16
54 TestErrorSpam/stop 18.06
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 123.53
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 7.15
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 1.8
65 TestFunctional/serial/CacheCmd/cache/add_remote 5.49
66 TestFunctional/serial/CacheCmd/cache/add_local 2.04
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
68 TestFunctional/serial/CacheCmd/cache/list 0.07
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.79
70 TestFunctional/serial/CacheCmd/cache/cache_reload 3.31
71 TestFunctional/serial/CacheCmd/cache/delete 0.14
72 TestFunctional/serial/MinikubeKubectlCmd 0.49
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.57
76 TestFunctional/serial/LogsCmd 3.04
77 TestFunctional/serial/LogsFileCmd 3.17
79 TestFunctional/parallel/ConfigCmd 0.41
80 TestFunctional/parallel/DashboardCmd 3.22
81 TestFunctional/parallel/DryRun 1.61
82 TestFunctional/parallel/InternationalLanguage 0.64
83 TestFunctional/parallel/StatusCmd 2.31
86 TestFunctional/parallel/ServiceCmd 14.38
88 TestFunctional/parallel/AddonsCmd 0.3
89 TestFunctional/parallel/PersistentVolumeClaim 26.65
91 TestFunctional/parallel/SSHCmd 1.42
92 TestFunctional/parallel/CpCmd 2.62
93 TestFunctional/parallel/MySQL 24.31
94 TestFunctional/parallel/FileSync 0.68
95 TestFunctional/parallel/CertSync 4.2
99 TestFunctional/parallel/NodeLabels 0.05
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
103 TestFunctional/parallel/Version/short 0.1
104 TestFunctional/parallel/Version/components 1.53
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.46
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.47
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.49
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.48
109 TestFunctional/parallel/ImageCommands/ImageBuild 3.41
110 TestFunctional/parallel/ImageCommands/Setup 1.92
111 TestFunctional/parallel/DockerEnv/bash 2.72
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.81
113 TestFunctional/parallel/UpdateContextCmd/no_changes 0.43
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.98
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.36
116 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.97
117 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.54
118 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.33
119 TestFunctional/parallel/ImageCommands/ImageRemove 1.04
120 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.95
121 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.91
122 TestFunctional/parallel/ProfileCmd/profile_not_create 1
123 TestFunctional/parallel/ProfileCmd/profile_list 0.75
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.8
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.22
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 5.25
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
135 TestFunctional/parallel/MountCmd/any-port 8.77
136 TestFunctional/parallel/MountCmd/specific-port 3.75
137 TestFunctional/delete_addon-resizer_images 0.27
138 TestFunctional/delete_my-image_image 0.12
139 TestFunctional/delete_minikube_cached_images 0.12
142 TestIngressAddonLegacy/StartLegacyK8sCluster 137.55
144 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.42
145 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.61
149 TestJSONOutput/start/Command 124.72
150 TestJSONOutput/start/Audit 0
152 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/pause/Command 0.92
156 TestJSONOutput/pause/Audit 0
158 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/unpause/Command 0.76
162 TestJSONOutput/unpause/Audit 0
164 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/stop/Command 17.92
168 TestJSONOutput/stop/Audit 0
170 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
172 TestErrorJSONOutput 0.76
174 TestKicCustomNetwork/create_custom_network 88.18
175 TestKicCustomNetwork/use_default_bridge_network 76.04
176 TestKicExistingNetwork 89.05
177 TestKicCustomSubnet 87.12
178 TestMainNoArgs 0.07
181 TestMountStart/serial/StartWithMountFirst 47.15
182 TestMountStart/serial/VerifyMountFirst 0.61
183 TestMountStart/serial/StartWithMountSecond 47.81
184 TestMountStart/serial/VerifyMountSecond 0.61
185 TestMountStart/serial/DeleteFirst 11.86
186 TestMountStart/serial/VerifyMountPostDelete 0.6
187 TestMountStart/serial/Stop 7.13
188 TestMountStart/serial/RestartStopped 29.47
189 TestMountStart/serial/VerifyMountPostStop 0.6
192 TestMultiNode/serial/FreshStart2Nodes 231.56
193 TestMultiNode/serial/DeployApp2Nodes 5.92
194 TestMultiNode/serial/PingHostFrom2Pods 0.84
195 TestMultiNode/serial/AddNode 109.66
196 TestMultiNode/serial/ProfileList 0.68
197 TestMultiNode/serial/CopyFile 22.7
198 TestMultiNode/serial/StopNode 11.47
199 TestMultiNode/serial/StartAfterStop 50.74
200 TestMultiNode/serial/RestartKeepsNodes 249.9
201 TestMultiNode/serial/DeleteNode 17.14
202 TestMultiNode/serial/StopMultiNode 35.36
203 TestMultiNode/serial/RestartMultiNode 138.33
204 TestMultiNode/serial/ValidateNameConflict 100.65
208 TestPreload 206.77
210 TestScheduledStopUnix 153.77
211 TestSkaffold 120.39
213 TestInsufficientStorage 64.65
214 TestRunningBinaryUpgrade 206.86
216 TestKubernetesUpgrade 158.87
217 TestMissingContainerUpgrade 189.34
229 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 8.47
230 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 11.34
231 TestStoppedBinaryUpgrade/Setup 0.85
232 TestStoppedBinaryUpgrade/Upgrade 151.7
234 TestPause/serial/Start 104.35
235 TestPause/serial/SecondStartNoReconfiguration 7.1
236 TestPause/serial/Pause 0.86
237 TestPause/serial/VerifyStatus 0.64
238 TestPause/serial/Unpause 0.84
239 TestPause/serial/PauseAgain 0.81
240 TestPause/serial/DeletePaused 9.57
241 TestPause/serial/VerifyDeletedResources 1
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.33
251 TestNoKubernetes/serial/StartWithK8s 54.17
252 TestStoppedBinaryUpgrade/MinikubeLogs 2.75
253 TestNetworkPlugins/group/auto/Start 104.61
254 TestNoKubernetes/serial/StartWithStopK8s 26.41
255 TestNoKubernetes/serial/Start 37.99
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.61
257 TestNoKubernetes/serial/ProfileList 2.2
258 TestNoKubernetes/serial/Stop 4.92
259 TestNetworkPlugins/group/auto/KubeletFlags 0.76
260 TestNoKubernetes/serial/StartNoArgs 20.47
261 TestNetworkPlugins/group/auto/NetCatPod 12.01
262 TestNetworkPlugins/group/auto/DNS 0.14
263 TestNetworkPlugins/group/auto/Localhost 0.13
264 TestNetworkPlugins/group/auto/HairPin 5.13
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.6
266 TestNetworkPlugins/group/kindnet/Start 111.66
267 TestNetworkPlugins/group/false/Start 114.08
268 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
269 TestNetworkPlugins/group/kindnet/KubeletFlags 0.64
270 TestNetworkPlugins/group/kindnet/NetCatPod 12.54
271 TestNetworkPlugins/group/false/KubeletFlags 1.03
272 TestNetworkPlugins/group/false/NetCatPod 13.52
273 TestNetworkPlugins/group/kindnet/DNS 0.15
274 TestNetworkPlugins/group/kindnet/Localhost 0.13
275 TestNetworkPlugins/group/kindnet/HairPin 0.13
276 TestNetworkPlugins/group/false/DNS 0.15
277 TestNetworkPlugins/group/false/Localhost 0.13
278 TestNetworkPlugins/group/false/HairPin 5.14
279 TestNetworkPlugins/group/enable-default-cni/Start 106.89
280 TestNetworkPlugins/group/bridge/Start 112.68
281 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.64
282 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.91
283 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
284 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
285 TestNetworkPlugins/group/enable-default-cni/HairPin 2.77
286 TestNetworkPlugins/group/bridge/KubeletFlags 0.66
287 TestNetworkPlugins/group/bridge/NetCatPod 13.98
288 TestNetworkPlugins/group/kubenet/Start 104.52
289 TestNetworkPlugins/group/bridge/DNS 0.18
290 TestNetworkPlugins/group/bridge/Localhost 0.46
291 TestNetworkPlugins/group/bridge/HairPin 0.16
292 TestNetworkPlugins/group/calico/Start 138.88
293 TestNetworkPlugins/group/kubenet/KubeletFlags 0.69
294 TestNetworkPlugins/group/kubenet/NetCatPod 10.99
295 TestNetworkPlugins/group/kubenet/DNS 0.15
296 TestNetworkPlugins/group/kubenet/Localhost 0.14
297 TestNetworkPlugins/group/kubenet/HairPin 0.14
298 TestNetworkPlugins/group/cilium/Start 91.55
299 TestNetworkPlugins/group/calico/ControllerPod 5.02
300 TestNetworkPlugins/group/calico/KubeletFlags 0.65
301 TestNetworkPlugins/group/calico/NetCatPod 12.98
302 TestNetworkPlugins/group/calico/DNS 0.16
303 TestNetworkPlugins/group/calico/Localhost 0.19
304 TestNetworkPlugins/group/calico/HairPin 0.18
306 TestNetworkPlugins/group/cilium/ControllerPod 5.02
307 TestNetworkPlugins/group/cilium/KubeletFlags 0.66
308 TestNetworkPlugins/group/cilium/NetCatPod 12.46
309 TestNetworkPlugins/group/cilium/DNS 0.15
310 TestNetworkPlugins/group/cilium/Localhost 0.15
311 TestNetworkPlugins/group/cilium/HairPin 0.14
313 TestStartStop/group/old-k8s-version/serial/FirstStart 161.04
314 TestStartStop/group/old-k8s-version/serial/DeployApp 10.21
315 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.81
316 TestStartStop/group/old-k8s-version/serial/Stop 18.24
317 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.4
318 TestStartStop/group/old-k8s-version/serial/SecondStart 57.55
319 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 18.02
320 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 7.1
321 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.66
322 TestStartStop/group/old-k8s-version/serial/Pause 4.38
324 TestStartStop/group/no-preload/serial/FirstStart 105
325 TestStartStop/group/no-preload/serial/DeployApp 11.18
326 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.88
327 TestStartStop/group/no-preload/serial/Stop 17.51
328 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.4
329 TestStartStop/group/no-preload/serial/SecondStart 377.39
331 TestStartStop/group/embed-certs/serial/FirstStart 349.58
332 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
333 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.86
334 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.68
335 TestStartStop/group/no-preload/serial/Pause 4.62
337 TestStartStop/group/default-k8s-different-port/serial/FirstStart 335.1
338 TestStartStop/group/embed-certs/serial/DeployApp 11.02
339 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.8
340 TestStartStop/group/embed-certs/serial/Stop 15.95
341 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.43
342 TestStartStop/group/embed-certs/serial/SecondStart 590.99
343 TestStartStop/group/default-k8s-different-port/serial/DeployApp 11.08
344 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.83
345 TestStartStop/group/default-k8s-different-port/serial/Stop 18.23
346 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.4
347 TestStartStop/group/default-k8s-different-port/serial/SecondStart 596.77
348 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
349 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.89
350 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.65
351 TestStartStop/group/embed-certs/serial/Pause 4.42
353 TestStartStop/group/newest-cni/serial/FirstStart 70.46
354 TestStartStop/group/newest-cni/serial/DeployApp 0
355 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.88
356 TestStartStop/group/newest-cni/serial/Stop 19.21
357 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.4
358 TestStartStop/group/newest-cni/serial/SecondStart 57.85
359 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
360 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
361 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.67
362 TestStartStop/group/newest-cni/serial/Pause 4.4
363 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.02
364 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 6.93
365 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.64
366 TestStartStop/group/default-k8s-different-port/serial/Pause 4.32
TestDownloadOnly/v1.16.0/json-events (20.65s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220329101104-2053 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:73: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220329101104-2053 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (20.650474135s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (20.65s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220329101104-2053
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220329101104-2053: exit status 85 (280.637306ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 10:11:04
	Running on machine: 37310
	Binary: Built with gc go1.17.7 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 10:11:04.069053    2060 out.go:297] Setting OutFile to fd 1 ...
	I0329 10:11:04.069202    2060 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:11:04.069207    2060 out.go:310] Setting ErrFile to fd 2...
	I0329 10:11:04.069211    2060 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:11:04.069290    2060 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	W0329 10:11:04.069387    2060 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/config/config.json: no such file or directory
	I0329 10:11:04.069831    2060 out.go:304] Setting JSON to true
	I0329 10:11:04.090024    2060 start.go:114] hostinfo: {"hostname":"37310.local","uptime":639,"bootTime":1648573225,"procs":323,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0329 10:11:04.090114    2060 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 10:11:04.116084    2060 notify.go:193] Checking for updates...
	W0329 10:11:04.116096    2060 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball: no such file or directory
	I0329 10:11:04.141786    2060 driver.go:346] Setting default libvirt URI to qemu:///system
	W0329 10:11:04.230556    2060 docker.go:113] docker version returned error: exit status 1
	I0329 10:11:04.256681    2060 start.go:283] selected driver: docker
	I0329 10:11:04.256697    2060 start.go:800] validating driver "docker" against <nil>
	I0329 10:11:04.256817    2060 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 10:11:04.424764    2060 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0329 10:11:04.477525    2060 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 10:11:04.643572    2060 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0329 10:11:04.670226    2060 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0329 10:11:04.724273    2060 start_flags.go:373] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0329 10:11:04.724518    2060 start_flags.go:819] Wait components to verify : map[apiserver:true system_pods:true]
	I0329 10:11:04.724573    2060 cni.go:93] Creating CNI manager for ""
	I0329 10:11:04.724602    2060 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 10:11:04.724654    2060 start_flags.go:306] config:
	{Name:download-only-20220329101104-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220329101104-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 10:11:04.750912    2060 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 10:11:04.777078    2060 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0329 10:11:04.777114    2060 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 10:11:04.777273    2060 cache.go:107] acquiring lock: {Name:mk6d965b511156bca174e03dcd688874863dab08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 10:11:04.777837    2060 cache.go:107] acquiring lock: {Name:mkce3d46d62e94ca7c4b10356f44e71bae1ab3b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 10:11:04.778217    2060 cache.go:107] acquiring lock: {Name:mk31f0c34d1b62ada39b20aeb415b00a1a2487d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 10:11:04.778290    2060 cache.go:107] acquiring lock: {Name:mk9a75d584a45ebb65dd6b80124fb57265a6cea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 10:11:04.779067    2060 cache.go:107] acquiring lock: {Name:mk0ed1ed1778f40fe6cb4e63db91a4ee7446e2e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 10:11:04.779370    2060 cache.go:107] acquiring lock: {Name:mk3f9c708d6e7452bbd850a8cfc5a3bff687b96f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 10:11:04.779383    2060 cache.go:107] acquiring lock: {Name:mkab65140f5f33c99ccabb58e34d079064d67655 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 10:11:04.779410    2060 cache.go:107] acquiring lock: {Name:mkad0d36251fe7b2bab14f70bfa6eb9e64ee945f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 10:11:04.779416    2060 cache.go:107] acquiring lock: {Name:mk08465359aa6eba2e15d62c7e5eee5507e3ec75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 10:11:04.779416    2060 cache.go:107] acquiring lock: {Name:mk7a3c636e00ab0315ac4e40cdea154108b02820 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0329 10:11:04.779438    2060 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/download-only-20220329101104-2053/config.json ...
	I0329 10:11:04.779489    2060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/download-only-20220329101104-2053/config.json: {Name:mk985a307ce8343bad2a37b2695b317cef0db9ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0329 10:11:04.779770    2060 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0329 10:11:04.779783    2060 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0329 10:11:04.779789    2060 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0329 10:11:04.779794    2060 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0329 10:11:04.779805    2060 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I0329 10:11:04.779924    2060 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0329 10:11:04.779932    2060 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0329 10:11:04.780078    2060 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0329 10:11:04.780114    2060 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
	I0329 10:11:04.780281    2060 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0329 10:11:04.780344    2060 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0329 10:11:04.780875    2060 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/linux/amd64/v1.16.0/kubeadm
	I0329 10:11:04.780881    2060 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/linux/amd64/v1.16.0/kubectl
	I0329 10:11:04.780875    2060 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/linux/amd64/v1.16.0/kubelet
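
The ?checksum=file:<url>.sha1 suffix on these download.go:101 URLs tells the downloader to fetch the published .sha1 file alongside each binary and verify the digest before the file lands in the cache. A standalone sketch of that verification step using plain net/http and crypto/sha1 (minikube's actual downloader differs; this is illustrative only):

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads url fully into memory, failing on non-200 responses.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	url := "https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl"
	body, err := fetch(url)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(url + ".sha1")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sumFile))[0] // the .sha1 file holds the hex digest
	sum := sha1.Sum(body)
	fmt.Println("checksum ok:", hex.EncodeToString(sum[:]) == want)
}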
	I0329 10:11:04.782313    2060 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0329 10:11:04.782752    2060 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0329 10:11:04.783992    2060 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0329 10:11:04.784824    2060 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0329 10:11:04.785131    2060 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0329 10:11:04.785149    2060 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0329 10:11:04.785169    2060 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0329 10:11:04.785201    2060 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0329 10:11:04.785232    2060 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0329 10:11:04.785456    2060 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0329 10:11:04.892626    2060 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
	I0329 10:11:04.892814    2060 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory
	I0329 10:11:04.892956    2060 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
	I0329 10:11:05.443612    2060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0
	I0329 10:11:05.516736    2060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0
	I0329 10:11:05.517779    2060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0329 10:11:05.539783    2060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0
	I0329 10:11:05.567661    2060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0
	I0329 10:11:05.570290    2060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0
	I0329 10:11:05.658924    2060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0329 10:11:05.677749    2060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2
	I0329 10:11:05.684764    2060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0329 10:11:05.684783    2060 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 907.466702ms
	I0329 10:11:05.684796    2060 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
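
The surrounding cache.go lines show each control-plane image being pulled and written as a tarball under .minikube/cache/images. A sketch of the same idea with go-containerregistry's crane package (whether this matches minikube's internal code path is an assumption; the tarball name is illustrative):

package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	ref := "k8s.gcr.io/pause:3.1"
	img, err := crane.Pull(ref)
	if err != nil {
		panic(err)
	}
	// Write the image as a docker-loadable tarball, the same shape as the
	// files under .minikube/cache/images in the log above.
	if err := crane.Save(img, ref, "pause_3.1.tar"); err != nil {
		panic(err)
	}
	fmt.Println("cached", ref)
}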
	I0329 10:11:05.922571    2060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1
	I0329 10:11:05.927881    2060 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7
	I0329 10:11:07.147766    2060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
	I0329 10:11:07.147785    2060 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 2.370464433s
	I0329 10:11:07.147796    2060 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
	I0329 10:11:08.665251    2060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0329 10:11:08.665274    2060 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.887043634s
	I0329 10:11:08.665286    2060 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0329 10:11:08.733509    2060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 exists
	I0329 10:11:08.733530    2060 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1" took 3.954693855s
	I0329 10:11:08.733538    2060 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
	I0329 10:11:08.734418    2060 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I0329 10:11:09.136609    2060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 exists
	I0329 10:11:09.136630    2060 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2" took 4.359202026s
	I0329 10:11:09.136640    2060 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 succeeded
	I0329 10:11:10.486213    2060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0329 10:11:10.486233    2060 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0" took 5.707712017s
	I0329 10:11:10.486248    2060 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0329 10:11:10.690653    2060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0329 10:11:10.690682    2060 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0" took 5.911987091s
	I0329 10:11:10.690691    2060 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0329 10:11:11.002636    2060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0329 10:11:11.002655    2060 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0" took 6.223142949s
	I0329 10:11:11.002669    2060 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0329 10:11:11.177707    2060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0329 10:11:11.177725    2060 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0" took 6.400281103s
	I0329 10:11:11.177734    2060 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0329 10:11:11.408535    2060 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 exists
	I0329 10:11:11.408558    2060 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0" took 6.63011039s
	I0329 10:11:11.408566    2060 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0329 10:11:11.408582    2060 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220329101104-2053"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.28s)
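Note on the cache flow in the log above: the cache.go lines follow a stat-before-fetch pattern — each image tarball path under .minikube/cache/images is checked on disk first (the cache.go:156 "exists" lines), and only missing tarballs are opened for download (the cache.go:161 "opening" lines). Below is a minimal Go sketch of that check, assuming a cacheDir layout matching the paths in the log; this is an illustration, not minikube's actual implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // tarballPath mirrors the layout seen in the log:
    // <cacheDir>/images/amd64/<registry>/<name>_<tag>
    func tarballPath(cacheDir, image string) string {
    	return filepath.Join(cacheDir, "images", "amd64", strings.ReplaceAll(image, ":", "_"))
    }

    func main() {
    	cacheDir := os.ExpandEnv("$HOME/.minikube/cache") // assumed default location
    	images := []string{"k8s.gcr.io/pause:3.1", "k8s.gcr.io/coredns:1.6.2"}
    	for _, img := range images {
    		p := tarballPath(cacheDir, img)
    		if _, err := os.Stat(p); err == nil {
    			fmt.Printf("%s exists, skipping download\n", p) // the cache.go:156 case
    			continue
    		}
    		fmt.Printf("%s missing, would download %s\n", p, img) // the cache.go:161 case
    	}
    }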

                                                
                                    
TestDownloadOnly/v1.23.5/json-events (8.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220329101104-2053 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:73: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220329101104-2053 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=docker --driver=docker : (8.229533266s)
--- PASS: TestDownloadOnly/v1.23.5/json-events (8.23s)

                                                
                                    
TestDownloadOnly/v1.23.5/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/preload-exists
--- PASS: TestDownloadOnly/v1.23.5/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.5/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/kubectl
--- PASS: TestDownloadOnly/v1.23.5/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.5/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220329101104-2053
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220329101104-2053: exit status 85 (280.581582ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 10:11:32
	Running on machine: 37310
	Binary: Built with gc go1.17.7 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 10:11:32.534974    2123 out.go:297] Setting OutFile to fd 1 ...
	I0329 10:11:32.535138    2123 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:11:32.535143    2123 out.go:310] Setting ErrFile to fd 2...
	I0329 10:11:32.535146    2123 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:11:32.535226    2123 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	W0329 10:11:32.535321    2123 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/config/config.json: no such file or directory
	I0329 10:11:32.535481    2123 out.go:304] Setting JSON to true
	I0329 10:11:32.550743    2123 start.go:114] hostinfo: {"hostname":"37310.local","uptime":667,"bootTime":1648573225,"procs":318,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0329 10:11:32.550836    2123 start.go:122] gopshost.Virtualization returned error: not implemented yet
	W0329 10:11:32.579871    2123 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball: no such file or directory
	I0329 10:11:32.579902    2123 notify.go:193] Checking for updates...
	I0329 10:11:32.605920    2123 config.go:176] Loaded profile config "download-only-20220329101104-2053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0329 10:11:32.605998    2123 start.go:708] api.Load failed for download-only-20220329101104-2053: filestore "download-only-20220329101104-2053": Docker machine "download-only-20220329101104-2053" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0329 10:11:32.606059    2123 driver.go:346] Setting default libvirt URI to qemu:///system
	W0329 10:11:32.606090    2123 start.go:708] api.Load failed for download-only-20220329101104-2053: filestore "download-only-20220329101104-2053": Docker machine "download-only-20220329101104-2053" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0329 10:11:32.700809    2123 docker.go:137] docker version: linux-20.10.6
	I0329 10:11:32.700908    2123 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 10:11:32.876105    2123 info.go:263] docker info: {ID:4GJZ:5WQJ:PTOH:OBGV:UGLB:2QMR:SRUC:WPW4:I7LT:V2VN:S3VH:GWN3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-29 17:11:32.815821368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0329 10:11:32.901795    2123 start.go:283] selected driver: docker
	I0329 10:11:32.901811    2123 start.go:800] validating driver "docker" against &{Name:download-only-20220329101104-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220329101104-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 10:11:32.902237    2123 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 10:11:33.089572    2123 info.go:263] docker info: {ID:4GJZ:5WQJ:PTOH:OBGV:UGLB:2QMR:SRUC:WPW4:I7LT:V2VN:S3VH:GWN3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:44 SystemTime:2022-03-29 17:11:33.021554191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0329 10:11:33.092518    2123 cni.go:93] Creating CNI manager for ""
	I0329 10:11:33.092537    2123 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 10:11:33.092558    2123 start_flags.go:306] config:
	{Name:download-only-20220329101104-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:download-only-20220329101104-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 10:11:33.119213    2123 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 10:11:33.145272    2123 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 10:11:33.145278    2123 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 10:11:33.220701    2123 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.5/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	I0329 10:11:33.220742    2123 cache.go:57] Caching tarball of preloaded images
	I0329 10:11:33.221028    2123 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime docker
	I0329 10:11:33.247919    2123 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4 ...
	I0329 10:11:33.287307    2123 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 10:11:33.287325    2123 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 10:11:33.343183    2123 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.5/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4?checksum=md5:b4b3d1771f6a934557953d7b31a587d4 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220329101104-2053"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.5/LogsDuration (0.28s)
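The download.go:101 line above appends ?checksum=md5:b4b3d1771f6a934557953d7b31a587d4 to the preload URL, so the downloader can verify the tarball's digest after the transfer. Below is a stdlib-only Go sketch of that verification step, assuming the tarball has already been fetched to a local path; verifyMD5 is an illustrative helper, not minikube code, and the digest is copied from the log line above.

    package main

    import (
    	"crypto/md5"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"os"
    )

    // verifyMD5 recomputes the file's md5 and compares it to the digest
    // carried in the ?checksum=md5:<hex> query parameter.
    func verifyMD5(path, wantHex string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	h := md5.New()
    	if _, err := io.Copy(h, f); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
    	}
    	return nil
    }

    func main() {
    	err := verifyMD5(
    		"preloaded-images-k8s-v17-v1.23.5-docker-overlay2-amd64.tar.lz4",
    		"b4b3d1771f6a934557953d7b31a587d4", // digest from the log line above
    	)
    	fmt.Println("verify result:", err)
    }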

                                                
                                    
TestDownloadOnly/v1.23.6-rc.0/json-events (8.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220329101104-2053 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:73: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220329101104-2053 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=docker --driver=docker : (8.358245447s)
--- PASS: TestDownloadOnly/v1.23.6-rc.0/json-events (8.36s)

                                                
                                    
TestDownloadOnly/v1.23.6-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.6-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.23.6-rc.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220329101104-2053
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220329101104-2053: exit status 85 (276.468039ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/03/29 10:11:41
	Running on machine: 37310
	Binary: Built with gc go1.17.7 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0329 10:11:41.046389    2159 out.go:297] Setting OutFile to fd 1 ...
	I0329 10:11:41.046719    2159 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:11:41.046725    2159 out.go:310] Setting ErrFile to fd 2...
	I0329 10:11:41.046729    2159 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:11:41.046912    2159 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	W0329 10:11:41.047136    2159 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/config/config.json: no such file or directory
	I0329 10:11:41.047396    2159 out.go:304] Setting JSON to true
	I0329 10:11:41.062365    2159 start.go:114] hostinfo: {"hostname":"37310.local","uptime":676,"bootTime":1648573225,"procs":326,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0329 10:11:41.062471    2159 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 10:11:41.088748    2159 notify.go:193] Checking for updates...
	I0329 10:11:41.115309    2159 config.go:176] Loaded profile config "download-only-20220329101104-2053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	W0329 10:11:41.115373    2159 start.go:708] api.Load failed for download-only-20220329101104-2053: filestore "download-only-20220329101104-2053": Docker machine "download-only-20220329101104-2053" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0329 10:11:41.115438    2159 driver.go:346] Setting default libvirt URI to qemu:///system
	W0329 10:11:41.115467    2159 start.go:708] api.Load failed for download-only-20220329101104-2053: filestore "download-only-20220329101104-2053": Docker machine "download-only-20220329101104-2053" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0329 10:11:41.208077    2159 docker.go:137] docker version: linux-20.10.6
	I0329 10:11:41.208204    2159 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 10:11:41.381373    2159 info.go:263] docker info: {ID:4GJZ:5WQJ:PTOH:OBGV:UGLB:2QMR:SRUC:WPW4:I7LT:V2VN:S3VH:GWN3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2022-03-29 17:11:41.327846854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0329 10:11:41.408322    2159 start.go:283] selected driver: docker
	I0329 10:11:41.408365    2159 start.go:800] validating driver "docker" against &{Name:download-only-20220329101104-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:download-only-20220329101104-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 10:11:41.408849    2159 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 10:11:41.580885    2159 info.go:263] docker info: {ID:4GJZ:5WQJ:PTOH:OBGV:UGLB:2QMR:SRUC:WPW4:I7LT:V2VN:S3VH:GWN3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2022-03-29 17:11:41.527889436 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0329 10:11:41.582836    2159 cni.go:93] Creating CNI manager for ""
	I0329 10:11:41.582851    2159 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0329 10:11:41.582863    2159 start_flags.go:306] config:
	{Name:download-only-20220329101104-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:download-only-20220329101104-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 10:11:41.609604    2159 cache.go:120] Beginning downloading kic base image for docker with docker
	I0329 10:11:41.635403    2159 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0329 10:11:41.635426    2159 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0329 10:11:41.704831    2159 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.6-rc.0/preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4
	I0329 10:11:41.704857    2159 cache.go:57] Caching tarball of preloaded images
	I0329 10:11:41.705044    2159 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime docker
	I0329 10:11:41.731352    2159 preload.go:238] getting checksum for preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0329 10:11:41.770963    2159 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0329 10:11:41.770979    2159 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0329 10:11:41.860745    2159 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.23.6-rc.0/preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:d90e40f602d4362984725b3ec643bc0d -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.6-rc.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220329101104-2053"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.28s)

                                                
                                    
TestDownloadOnly/DeleteAll (1.11s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:193: (dbg) Run:  out/minikube-darwin-amd64 delete --all
aaa_download_only_test.go:193: (dbg) Done: out/minikube-darwin-amd64 delete --all: (1.107891922s)
--- PASS: TestDownloadOnly/DeleteAll (1.11s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.63s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220329101104-2053
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.63s)

                                                
                                    
TestDownloadOnlyKic (7.38s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220329101152-2053 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:230: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220329101152-2053 --force --alsologtostderr --driver=docker : (5.821894603s)
helpers_test.go:176: Cleaning up "download-docker-20220329101152-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220329101152-2053
--- PASS: TestDownloadOnlyKic (7.38s)

                                                
                                    
TestBinaryMirror (1.9s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:316: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220329101159-2053 --alsologtostderr --binary-mirror http://127.0.0.1:49810 --driver=docker 
helpers_test.go:176: Cleaning up "binary-mirror-20220329101159-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220329101159-2053
--- PASS: TestBinaryMirror (1.90s)
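TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:49810, redirecting the kubectl/kubelet/kubeadm downloads to a local HTTP server instead of the upstream release bucket. Below is a minimal Go sketch of such a mirror, assuming the binaries have been pre-staged under a local ./mirror directory with whatever path layout the client requests; this is an illustration of the idea, not the test's own server.

    package main

    import (
    	"log"
    	"net/http"
    )

    func main() {
    	// Serve a local directory as the mirror root; request paths
    	// must already exist under ./mirror (layout assumed).
    	http.Handle("/", http.FileServer(http.Dir("./mirror")))
    	log.Fatal(http.ListenAndServe("127.0.0.1:49810", nil))
    }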

                                                
                                    
TestOffline (127.13s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20220329110225-2053 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20220329110225-2053 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (1m49.938222565s)
helpers_test.go:176: Cleaning up "offline-docker-20220329110225-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20220329110225-2053
E0329 11:04:23.418155    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20220329110225-2053: (17.188608689s)
--- PASS: TestOffline (127.13s)

                                                
                                    
TestAddons/Setup (142.27s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220329101201-2053 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220329101201-2053 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m22.271910616s)
--- PASS: TestAddons/Setup (142.27s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.73s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 1.968388ms
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-bd6f4dd56-kb6r7" [7800d392-c2d7-461a-a39a-71f895ddf01c] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006578379s
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220329101201-2053 top pods -n kube-system
addons_test.go:383: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220329101201-2053 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.73s)
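The helpers_test.go:343 lines above show the wait pattern used throughout this report: poll the API server for pods matching a label selector until every match reports Ready (here, "k8s-app=metrics-server" within 6m0s). Below is a hedged client-go sketch of that loop; the kubeconfig path, namespace, and selector come from the log, but this is an illustration, not the actual test helper.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// Poll every 2s for up to 6m, mirroring the test's timeout.
    	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
    		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
    		if err != nil || len(pods.Items) == 0 {
    			return false, nil // keep polling on transient errors or no matches yet
    		}
    		for i := range pods.Items {
    			if !podReady(&pods.Items[i]) {
    				return false, nil
    			}
    		}
    		return true, nil
    	})
    	fmt.Println("wait result:", err)
    }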

                                                
                                    
TestAddons/parallel/HelmTiller (11.57s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 13.734143ms
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:343: "tiller-deploy-6d67d5465d-5tnsm" [d9f05cf3-939a-4be2-973c-6f411de3f7bb] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009571817s
addons_test.go:424: (dbg) Run:  kubectl --context addons-20220329101201-2053 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:424: (dbg) Done: kubectl --context addons-20220329101201-2053 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.988262071s)
addons_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220329101201-2053 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.57s)

                                                
                                    
TestAddons/parallel/CSI (42.16s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 5.214808ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20220329101201-2053 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220329101201-2053 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run:  kubectl --context addons-20220329101201-2053 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [908cec70-4a8c-43ba-9949-9528cd66ae5b] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [908cec70-4a8c-43ba-9949-9528cd66ae5b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [908cec70-4a8c-43ba-9949-9528cd66ae5b] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.008037254s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20220329101201-2053 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220329101201-2053 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220329101201-2053 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20220329101201-2053 delete pod task-pv-pod
addons_test.go:551: (dbg) Run:  kubectl --context addons-20220329101201-2053 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20220329101201-2053 create -f testdata/csi-hostpath-driver/pvc-restore.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220329101201-2053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20220329101201-2053 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [15861507-f2fd-49dc-b064-dfbfb79881f0] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod-restore" [15861507-f2fd-49dc-b064-dfbfb79881f0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod-restore" [15861507-f2fd-49dc-b064-dfbfb79881f0] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.010986974s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20220329101201-2053 delete pod task-pv-pod-restore
addons_test.go:581: (dbg) Run:  kubectl --context addons-20220329101201-2053 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20220329101201-2053 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220329101201-2053 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:589: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220329101201-2053 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.880434976s)
addons_test.go:593: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220329101201-2053 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.16s)

                                                
                                    
TestAddons/serial/GCPAuth (15.32s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20220329101201-2053 create -f testdata/busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [1242015a-c8f6-474a-b2a7-7853dcea2809] Pending
helpers_test.go:343: "busybox" [1242015a-c8f6-474a-b2a7-7853dcea2809] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [1242015a-c8f6-474a-b2a7-7853dcea2809] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.008207287s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20220329101201-2053 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:629: (dbg) Run:  kubectl --context addons-20220329101201-2053 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20220329101201-2053 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220329101201-2053 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220329101201-2053 addons disable gcp-auth --alsologtostderr -v=1: (6.727923608s)
--- PASS: TestAddons/serial/GCPAuth (15.32s)
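
The three kubectl exec probes above are the substance of the test: the gcp-auth webhook must have injected a credentials file and a project env var into the busybox pod. A sketch that reruns the same probes, assuming the context and pod names from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Each probe must succeed inside the pod for gcp-auth to count as working.
		for _, probe := range []string{
			"printenv GOOGLE_APPLICATION_CREDENTIALS",
			"cat /google-app-creds.json",
			"printenv GOOGLE_CLOUD_PROJECT",
		} {
			out, err := exec.Command("kubectl", "--context", "addons-20220329101201-2053",
				"exec", "busybox", "--", "/bin/sh", "-c", probe).CombinedOutput()
			fmt.Printf("%s -> err=%v\n%s", probe, err, out)
		}
	}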

TestAddons/StoppedEnableDisable (18.47s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220329101201-2053
addons_test.go:133: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220329101201-2053: (18.02683387s)
addons_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220329101201-2053
addons_test.go:141: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220329101201-2053
--- PASS: TestAddons/StoppedEnableDisable (18.47s)

TestCertOptions (69.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220329110533-2053 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0329 11:05:57.321967    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:05:57.329096    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:05:57.339966    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:05:57.367600    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:05:57.417542    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:05:57.500838    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:05:57.661016    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:05:57.984257    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:05:58.634048    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:05:59.917594    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:06:02.485282    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:06:07.609570    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:06:07.660468    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 11:06:17.859082    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20220329110533-2053 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (1m0.708542804s)
cert_options_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220329110533-2053 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220329110533-2053 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-20220329110533-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220329110533-2053
E0329 11:06:38.345182    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20220329110533-2053: (6.982609084s)
--- PASS: TestCertOptions (69.16s)
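
The openssl run above is where the custom SANs are verified. A sketch of the same check, assuming the built binary and the profile name from this run; the SAN list mirrors the flags passed to start:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Dump the served apiserver cert from inside the node and check that every
		// value passed via --apiserver-ips / --apiserver-names shows up in it.
		out, err := exec.Command("out/minikube-darwin-amd64", "-p",
			"cert-options-20220329110533-2053", "ssh",
			"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		for _, want := range []string{"127.0.0.1", "192.168.15.15", "localhost", "www.google.com"} {
			fmt.Printf("SAN %s present: %v\n", want, strings.Contains(string(out), want))
		}
	}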

TestCertExpiration (263.87s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220329110529-2053 --memory=2048 --cert-expiration=3m --driver=docker 

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220329110529-2053 --memory=2048 --cert-expiration=3m --driver=docker : (1m1.944462415s)

=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220329110529-2053 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220329110529-2053 --memory=2048 --cert-expiration=8760h --driver=docker : (6.803828242s)
helpers_test.go:176: Cleaning up "cert-expiration-20220329110529-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220329110529-2053
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20220329110529-2053: (15.121854242s)
--- PASS: TestCertExpiration (263.87s)
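
The two start invocations above bracket the certificate lifetime: the first issues 3m certs, the second runs after they lapse and must regenerate them. A sketch of that sequence, assuming the same binary and profile; the fixed 3m sleep is a simplification of the test's own wait:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		const mk = "out/minikube-darwin-amd64"
		const p = "cert-expiration-20220329110529-2053"
		// First start issues cluster certificates valid for only three minutes.
		exec.Command(mk, "start", "-p", p, "--memory=2048",
			"--cert-expiration=3m", "--driver=docker").Run()
		// Let them lapse, then start again: minikube must detect the expired
		// certs and reissue them with the new lifetime.
		time.Sleep(3 * time.Minute)
		out, _ := exec.Command(mk, "start", "-p", p, "--memory=2048",
			"--cert-expiration=8760h", "--driver=docker").CombinedOutput()
		fmt.Printf("%s", out)
	}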

TestDockerFlags (56.67s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20220329110432-2053 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0329 11:05:04.268151    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
docker_test.go:46: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20220329110432-2053 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (40.005113155s)
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220329110432-2053 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:62: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220329110432-2053 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-20220329110432-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20220329110432-2053

=== CONT  TestDockerFlags
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20220329110432-2053: (15.133919316s)
--- PASS: TestDockerFlags (56.67s)
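
The systemctl probes above are where --docker-env and --docker-opt land: in the docker unit's Environment and ExecStart properties inside the node. A sketch of the Environment half, assuming the same binary and profile:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Every --docker-env pair should surface in the docker unit's Environment
		// property inside the node (--docker-opt lands in ExecStart instead).
		out, err := exec.Command("out/minikube-darwin-amd64", "-p",
			"docker-flags-20220329110432-2053", "ssh",
			"sudo systemctl show docker --property=Environment --no-pager").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
			fmt.Printf("%s present: %v\n", kv, strings.Contains(string(out), kv))
		}
	}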

TestForceSystemdFlag (78.77s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220329110414-2053 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20220329110414-2053 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (1m1.688430329s)
docker_test.go:105: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20220329110414-2053 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-20220329110414-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220329110414-2053

=== CONT  TestForceSystemdFlag
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20220329110414-2053: (16.330057099s)
--- PASS: TestForceSystemdFlag (78.77s)
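
The docker info probe above is the whole assertion: with --force-systemd the node's Docker daemon should report the systemd cgroup driver. A sketch, assuming the same binary and profile; TestForceSystemdEnv below runs the same probe but presumably drives the setting through the MINIKUBE_FORCE_SYSTEMD environment variable rather than the flag:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// With --force-systemd the node's Docker daemon should report "systemd"
		// rather than the default "cgroupfs".
		out, err := exec.Command("out/minikube-darwin-amd64", "-p",
			"force-systemd-flag-20220329110414-2053", "ssh",
			"docker info --format {{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
	}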

TestForceSystemdEnv (81.54s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220329110252-2053 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0329 11:03:07.434532    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
docker_test.go:151: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20220329110252-2053 --memory=2048 --alsologtostderr -v=5 --driver=docker : (1m5.520484675s)
docker_test.go:105: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20220329110252-2053 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-20220329110252-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220329110252-2053
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20220329110252-2053: (15.307818566s)
--- PASS: TestForceSystemdEnv (81.54s)

TestHyperKitDriverInstallOrUpdate (6.57s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.57s)

TestErrorSpam/setup (73.01s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220329101558-2053 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 --driver=docker 
error_spam_test.go:79: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220329101558-2053 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 --driver=docker : (1m13.007240408s)
error_spam_test.go:89: acceptable stderr: "! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilites with Kubernetes 1.23.5."
--- PASS: TestErrorSpam/setup (73.01s)

TestErrorSpam/start (2.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 start --dry-run
--- PASS: TestErrorSpam/start (2.40s)

TestErrorSpam/status (1.94s)

=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 status
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 status
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 status
--- PASS: TestErrorSpam/status (1.94s)

TestErrorSpam/pause (2.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 pause
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 pause
--- PASS: TestErrorSpam/pause (2.13s)

TestErrorSpam/unpause (2.16s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 unpause
--- PASS: TestErrorSpam/unpause (2.16s)

TestErrorSpam/stop (18.06s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 stop
error_spam_test.go:157: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 stop: (17.320213476s)
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220329101558-2053 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-20220329101558-2053 stop
--- PASS: TestErrorSpam/stop (18.06s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1796: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/files/etc/test/nested/copy/2053/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (123.53s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2178: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220329101744-2053 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
E0329 10:19:23.472095    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 10:19:23.478842    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 10:19:23.488995    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 10:19:23.510242    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 10:19:23.550336    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 10:19:23.630664    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 10:19:23.795840    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 10:19:24.116838    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 10:19:24.759183    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 10:19:26.040317    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 10:19:28.601333    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 10:19:33.721912    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 10:19:43.970067    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
functional_test.go:2178: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220329101744-2053 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (2m3.530041791s)
--- PASS: TestFunctional/serial/StartWithProxy (123.53s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.15s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:656: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220329101744-2053 --alsologtostderr -v=8
functional_test.go:656: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220329101744-2053 --alsologtostderr -v=8: (7.150117155s)
functional_test.go:660: soft start took 7.150625427s for "functional-20220329101744-2053" cluster.
--- PASS: TestFunctional/serial/SoftStart (7.15s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:678: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (1.8s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:693: (dbg) Run:  kubectl --context functional-20220329101744-2053 get po -A
functional_test.go:693: (dbg) Done: kubectl --context functional-20220329101744-2053 get po -A: (1.803839643s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.80s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 cache add k8s.gcr.io/pause:3.1
functional_test.go:1046: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 cache add k8s.gcr.io/pause:3.1: (1.313810597s)
functional_test.go:1046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 cache add k8s.gcr.io/pause:3.3
functional_test.go:1046: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 cache add k8s.gcr.io/pause:3.3: (2.213079605s)
functional_test.go:1046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 cache add k8s.gcr.io/pause:latest
functional_test.go:1046: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 cache add k8s.gcr.io/pause:latest: (1.963739176s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.49s)

TestFunctional/serial/CacheCmd/cache/add_local (2.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220329101744-2053 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/functional-20220329101744-20532149638089
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 cache add minikube-local-cache-test:functional-20220329101744-2053
functional_test.go:1089: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 cache add minikube-local-cache-test:functional-20220329101744-2053: (1.447040357s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 cache delete minikube-local-cache-test:functional-20220329101744-2053
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220329101744-2053
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.04s)
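
The flow above builds a throwaway image on the host, pushes it into the node through minikube's cache, then cleans up both sides. A sketch, assuming a Dockerfile in the working directory and the binary/profile from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v -> err=%v\n%s", name, args, err, out)
	}

	func main() {
		const img = "minikube-local-cache-test:functional-20220329101744-2053"
		const mk = "out/minikube-darwin-amd64"
		const p = "functional-20220329101744-2053"
		run("docker", "build", "-t", img, ".") // assumes a Dockerfile in the cwd
		run(mk, "-p", p, "cache", "add", img)  // copies the image into the node
		run(mk, "-p", p, "cache", "delete", img)
		run("docker", "rmi", img)
	}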

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
E0329 10:20:04.451311    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.79s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (612.160757ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 cache reload: (1.387737726s)
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.31s)
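
The sequence above is the point of the test: delete the image inside the node, confirm crictl no longer finds it (the FATA / exit status 1 above is the expected intermediate state), then repopulate the node from the host-side cache. A sketch of the same sequence, assuming the binary and profile from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const mk = "out/minikube-darwin-amd64"
		const p = "functional-20220329101744-2053"
		// Remove the image inside the node.
		exec.Command(mk, "-p", p, "ssh", "sudo docker rmi k8s.gcr.io/pause:latest").Run()
		// crictl should now fail to find it; a nil error here means the test's
		// precondition was not met.
		if exec.Command(mk, "-p", p, "ssh",
			"sudo crictl inspecti k8s.gcr.io/pause:latest").Run() == nil {
			fmt.Println("unexpected: image still present before reload")
		}
		// Repopulate the node from the host-side cache and re-check.
		exec.Command(mk, "-p", p, "cache", "reload").Run()
		if err := exec.Command(mk, "-p", p, "ssh",
			"sudo crictl inspecti k8s.gcr.io/pause:latest").Run(); err != nil {
			fmt.Println("image missing after reload:", err)
		}
	}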

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.49s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:713: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 kubectl -- --context functional-20220329101744-2053 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.49s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.57s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:738: (dbg) Run:  out/kubectl --context functional-20220329101744-2053 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.57s)

TestFunctional/serial/LogsCmd (3.04s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 logs: (3.041451299s)
--- PASS: TestFunctional/serial/LogsCmd (3.04s)

TestFunctional/serial/LogsFileCmd (3.17s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1253: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/functional-20220329101744-20531641439954/logs.txt
functional_test.go:1253: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/functional-20220329101744-20531641439954/logs.txt: (3.172242331s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.17s)

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220329101744-2053 config get cpus: exit status 14 (44.667445ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220329101744-2053 config get cpus: exit status 14 (44.951257ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
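
The exit status 14 above is how config get reports an unset key; after config set it exits 0 and prints the value. A sketch of the set/get/unset cycle, assuming the built binary and profile from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const mk = "out/minikube-darwin-amd64"
		const p = "functional-20220329101744-2053"
		get := func() {
			cmd := exec.Command(mk, "-p", p, "config", "get", "cpus")
			out, _ := cmd.CombinedOutput()
			fmt.Printf("get cpus (exit %d): %s\n", cmd.ProcessState.ExitCode(), out)
		}
		get() // unset key -> exit status 14
		exec.Command(mk, "-p", p, "config", "set", "cpus", "2").Run()
		get() // now exits 0 and prints 2
		exec.Command(mk, "-p", p, "config", "unset", "cpus").Run()
		get() // unset again -> exit status 14
	}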

TestFunctional/parallel/DashboardCmd (3.22s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220329101744-2053 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:907: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220329101744-2053 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 5222: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (3.22s)

TestFunctional/parallel/DryRun (1.61s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:971: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220329101744-2053 --dry-run --memory 250MB --alsologtostderr --driver=docker 

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:971: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220329101744-2053 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (728.805298ms)

-- stdout --
	* [functional-20220329101744-2053] minikube v1.25.2 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13730
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0329 10:22:08.566329    5097 out.go:297] Setting OutFile to fd 1 ...
	I0329 10:22:08.566484    5097 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:22:08.566489    5097 out.go:310] Setting ErrFile to fd 2...
	I0329 10:22:08.566493    5097 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:22:08.566573    5097 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 10:22:08.566830    5097 out.go:304] Setting JSON to false
	I0329 10:22:08.582850    5097 start.go:114] hostinfo: {"hostname":"37310.local","uptime":1303,"bootTime":1648573225,"procs":324,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0329 10:22:08.582942    5097 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 10:22:08.608705    5097 out.go:176] * [functional-20220329101744-2053] minikube v1.25.2 on Darwin 11.2.3
	I0329 10:22:08.655653    5097 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 10:22:08.681619    5097 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 10:22:08.707729    5097 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0329 10:22:08.733689    5097 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 10:22:08.759493    5097 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	I0329 10:22:08.759864    5097 config.go:176] Loaded profile config "functional-20220329101744-2053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 10:22:08.760195    5097 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 10:22:08.916650    5097 docker.go:137] docker version: linux-20.10.6
	I0329 10:22:08.916791    5097 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 10:22:09.125072    5097 info.go:263] docker info: {ID:4GJZ:5WQJ:PTOH:OBGV:UGLB:2QMR:SRUC:WPW4:I7LT:V2VN:S3VH:GWN3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:51 SystemTime:2022-03-29 17:22:09.056190012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0329 10:22:09.172917    5097 out.go:176] * Using the docker driver based on existing profile
	I0329 10:22:09.172982    5097 start.go:283] selected driver: docker
	I0329 10:22:09.172992    5097 start.go:800] validating driver "docker" against &{Name:functional-20220329101744-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329101744-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 10:22:09.173153    5097 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0329 10:22:09.201579    5097 out.go:176] 
	W0329 10:22:09.201693    5097 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0329 10:22:09.227752    5097 out.go:176] 

** /stderr **
functional_test.go:988: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220329101744-2053 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.61s)
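
A --dry-run start runs the full flag/driver validation without touching the cluster; here the 250MB request trips the 1800MB floor and the command exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A sketch reproducing that, assuming the built binary and profile from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Validation only: nothing is created or mutated under --dry-run.
		cmd := exec.Command("out/minikube-darwin-amd64", "start",
			"-p", "functional-20220329101744-2053", "--dry-run",
			"--memory", "250MB", "--alsologtostderr", "--driver=docker")
		out, _ := cmd.CombinedOutput()
		fmt.Printf("exit status %d\n%s", cmd.ProcessState.ExitCode(), out)
	}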

TestFunctional/parallel/InternationalLanguage (0.64s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1017: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220329101744-2053 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1017: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220329101744-2053 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (641.333628ms)

-- stdout --
	* [functional-20220329101744-2053] minikube v1.25.2 sur Darwin 11.2.3
	  - MINIKUBE_LOCATION=13730
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0329 10:21:58.088618    4750 out.go:297] Setting OutFile to fd 1 ...
	I0329 10:21:58.088834    4750 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:21:58.088839    4750 out.go:310] Setting ErrFile to fd 2...
	I0329 10:21:58.088842    4750 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:21:58.089115    4750 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 10:21:58.089474    4750 out.go:304] Setting JSON to false
	I0329 10:21:58.105827    4750 start.go:114] hostinfo: {"hostname":"37310.local","uptime":1293,"bootTime":1648573225,"procs":333,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.2.3","kernelVersion":"20.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0329 10:21:58.105935    4750 start.go:122] gopshost.Virtualization returned error: not implemented yet
	I0329 10:21:58.132380    4750 out.go:176] * [functional-20220329101744-2053] minikube v1.25.2 sur Darwin 11.2.3
	I0329 10:21:58.179868    4750 out.go:176]   - MINIKUBE_LOCATION=13730
	I0329 10:21:58.207093    4750 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	I0329 10:21:58.234069    4750 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0329 10:21:58.259880    4750 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0329 10:21:58.286018    4750 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	I0329 10:21:58.286750    4750 config.go:176] Loaded profile config "functional-20220329101744-2053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 10:21:58.287391    4750 driver.go:346] Setting default libvirt URI to qemu:///system
	I0329 10:21:58.389070    4750 docker.go:137] docker version: linux-20.10.6
	I0329 10:21:58.389203    4750 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0329 10:21:58.579892    4750 info.go:263] docker info: {ID:4GJZ:5WQJ:PTOH:OBGV:UGLB:2QMR:SRUC:WPW4:I7LT:V2VN:S3VH:GWN3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:51 SystemTime:2022-03-29 17:21:58.514917603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.0.0-beta.1] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:<nil>}}
	I0329 10:21:58.606754    4750 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I0329 10:21:58.606771    4750 start.go:283] selected driver: docker
	I0329 10:21:58.606779    4750 start.go:800] validating driver "docker" against &{Name:functional-20220329101744-2053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220329101744-2053 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0329 10:21:58.606850    4750 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0329 10:21:58.635223    4750 out.go:176] 
	W0329 10:21:58.635370    4750 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0329 10:21:58.661456    4750 out.go:176] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.64s)
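
The InternationalLanguage test above intentionally re-runs minikube under a French locale with an undersized memory request; the French stderr translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is less than the usable minimum of 1800MB", which is exactly the localized failure the test wants to see. A minimal Go sketch of reproducing such a run outside the harness, assuming minikube picks the locale up from LC_ALL; the --dry-run flag and the 250MB value are assumptions inferred from this log rather than taken from the test source:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical reproduction: a French locale plus a memory request below
	// minikube's usable minimum, so start fails with the localized error above.
	cmd := exec.Command("out/minikube-darwin-amd64", "start",
		"-p", "functional-20220329101744-2053", "--dry-run", "--memory", "250MB")
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, err := cmd.CombinedOutput()
	fmt.Printf("exit: %v\n%s", err, out)
}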

TestFunctional/parallel/StatusCmd (2.31s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:857: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.31s)
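
The status checks above exercise three output modes: the default table, a Go template (the -f form, whose field names Host, Kubelet, APIServer and Kubeconfig come straight from the command shown), and JSON. A small Go sketch of consuming the JSON form, assuming those same top-level field names:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Status mirrors the fields the go-template above reads; the exact JSON
// shape is assumed from that template.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64",
		"-p", "functional-20220329101744-2053", "status", "-o", "json").Output()
	if err != nil {
		// status exits non-zero when a component is down; stdout may still be JSON
		fmt.Println("status returned:", err)
	}
	var s Status
	if err := json.Unmarshal(out, &s); err == nil {
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", s.Host, s.Kubelet, s.APIServer)
	}
}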

TestFunctional/parallel/ServiceCmd (14.38s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) Run:  kubectl --context functional-20220329101744-2053 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1449: (dbg) Run:  kubectl --context functional-20220329101744-2053 expose deployment hello-node --type=NodePort --port=8080
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1454: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-54fbb85-glf9q" [26dd9198-11bd-43ce-932d-4846228eada7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:343: "hello-node-54fbb85-glf9q" [26dd9198-11bd-43ce-932d-4846228eada7] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1454: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 6.008266133s
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 service list
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1459: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 service list: (1.450434396s)
functional_test.go:1473: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 service --namespace=default --https --url hello-node
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1473: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 service --namespace=default --https --url hello-node: (2.410626594s)
functional_test.go:1486: found endpoint: https://127.0.0.1:54854
functional_test.go:1501: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 service hello-node --url --format={{.IP}}
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1501: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 service hello-node --url --format={{.IP}}: (2.192778685s)
functional_test.go:1515: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 service hello-node --url
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1515: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 service hello-node --url: (2.197087332s)
functional_test.go:1521: found endpoint for hello-node: http://127.0.0.1:55042
--- PASS: TestFunctional/parallel/ServiceCmd (14.38s)
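
The flow above is the standard NodePort recipe on the macOS docker driver, where node ports are not directly routable from the host: create a deployment, expose it, then let minikube service resolve a reachable 127.0.0.1 URL. A compact Go sketch of the same three commands, reusing the profile name from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command and panics with its combined output on failure.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
	return strings.TrimSpace(string(out))
}

func main() {
	ctx := "functional-20220329101744-2053"
	run("kubectl", "--context", ctx, "create", "deployment", "hello-node",
		"--image=k8s.gcr.io/echoserver:1.8")
	run("kubectl", "--context", ctx, "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")
	// On the docker driver this returns a localhost tunnel endpoint,
	// e.g. http://127.0.0.1:55042 in the log above.
	url := run("out/minikube-darwin-amd64", "-p", ctx, "service", "hello-node", "--url")
	fmt.Println("endpoint:", url)
}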

TestFunctional/parallel/AddonsCmd (0.3s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1630: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 addons list
functional_test.go:1642: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.30s)

TestFunctional/parallel/PersistentVolumeClaim (26.65s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [e52ce334-96a3-4dfb-9e1d-9977c6a2572d] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01652762s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20220329101744-2053 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20220329101744-2053 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220329101744-2053 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220329101744-2053 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [37e90745-5930-41db-9ed3-a3b044304ee3] Pending
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [37e90745-5930-41db-9ed3-a3b044304ee3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [37e90745-5930-41db-9ed3-a3b044304ee3] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.009569218s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20220329101744-2053 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20220329101744-2053 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220329101744-2053 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [9a4cc1b9-a10c-4202-8a53-48242921a9c9] Pending
helpers_test.go:343: "sp-pod" [9a4cc1b9-a10c-4202-8a53-48242921a9c9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [9a4cc1b9-a10c-4202-8a53-48242921a9c9] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.006966518s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20220329101744-2053 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.65s)
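
The second apply/wait cycle above is the actual persistence assertion: a marker file is written through the first sp-pod, the pod is deleted and recreated from the same manifest, and the file must still be on the claim. A Go sketch of that check, with the manifest paths taken from the repo's testdata as shown above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubectl runs a command against the test cluster's context.
func kubectl(args ...string) string {
	args = append([]string{"--context", "functional-20220329101744-2053"}, args...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	// Write a marker through one pod, replace the pod, and confirm the
	// file survived on the PersistentVolumeClaim.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the real test waits for the new pod to be Running before this step)
	ls := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Println("persisted:", strings.Contains(ls, "foo"))
}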

TestFunctional/parallel/SSHCmd (1.42s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1665: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "echo hello"
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1682: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.42s)

TestFunctional/parallel/CpCmd (2.62s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh -n functional-20220329101744-2053 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 cp functional-20220329101744-2053:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mk_test2199232488/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh -n functional-20220329101744-2053 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.62s)
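
minikube cp copies from the host into the node (and back out, using the profile:path form shown above), and the test verifies each copy by cat-ing the file over ssh. A short Go sketch of the host-to-node round trip, reusing the paths from this log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	p := "functional-20220329101744-2053"
	mk := "out/minikube-darwin-amd64"
	// Push the file into the node...
	if out, err := exec.Command(mk, "-p", p, "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		panic(string(out))
	}
	// ...then read it back over ssh to confirm the copy landed.
	back, err := exec.Command(mk, "-p", p, "ssh", "-n", p,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("round-tripped %d bytes\n", len(back))
}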

TestFunctional/parallel/MySQL (24.31s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-20220329101744-2053 replace --force -f testdata/mysql.yaml
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1740: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-lwqr2" [04e0ad15-5930-41b2-8fa7-d2bb3d1e920e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-lwqr2" [04e0ad15-5930-41b2-8fa7-d2bb3d1e920e] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1740: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.019769432s
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220329101744-2053 exec mysql-b87c45988-lwqr2 -- mysql -ppassword -e "show databases;"
functional_test.go:1748: (dbg) Non-zero exit: kubectl --context functional-20220329101744-2053 exec mysql-b87c45988-lwqr2 -- mysql -ppassword -e "show databases;": exit status 1 (157.086334ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220329101744-2053 exec mysql-b87c45988-lwqr2 -- mysql -ppassword -e "show databases;"
functional_test.go:1748: (dbg) Non-zero exit: kubectl --context functional-20220329101744-2053 exec mysql-b87c45988-lwqr2 -- mysql -ppassword -e "show databases;": exit status 1 (161.816817ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220329101744-2053 exec mysql-b87c45988-lwqr2 -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1748: (dbg) Non-zero exit: kubectl --context functional-20220329101744-2053 exec mysql-b87c45988-lwqr2 -- mysql -ppassword -e "show databases;": exit status 1 (178.966905ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1748: (dbg) Run:  kubectl --context functional-20220329101744-2053 exec mysql-b87c45988-lwqr2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.31s)
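
The non-zero exits above are expected churn while the MySQL container initializes: ERROR 1045 likely comes from the init phase before the configured root password is in effect, and ERROR 2002 from the server restart that follows, when the socket briefly disappears. The test simply retries until show databases succeeds; a Go sketch of that retry loop, using the pod name from this run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-20220329101744-2053", "exec",
		"mysql-b87c45988-lwqr2", "--", "mysql", "-ppassword", "-e", "show databases;"}
	// Retry through the transient 1045/2002 errors seen in the log above.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		time.Sleep(2 * time.Second) // back off while the DB initializes
	}
	panic("mysql never became ready")
}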

TestFunctional/parallel/FileSync (0.68s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1870: Checking for existence of /etc/test/nested/copy/2053/hosts within VM
functional_test.go:1872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "sudo cat /etc/test/nested/copy/2053/hosts"
functional_test.go:1877: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.68s)

TestFunctional/parallel/CertSync (4.2s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1913: Checking for existence of /etc/ssl/certs/2053.pem within VM
functional_test.go:1914: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "sudo cat /etc/ssl/certs/2053.pem"
functional_test.go:1913: Checking for existence of /usr/share/ca-certificates/2053.pem within VM
functional_test.go:1914: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "sudo cat /usr/share/ca-certificates/2053.pem"
functional_test.go:1913: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1914: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1940: Checking for existence of /etc/ssl/certs/20532.pem within VM
functional_test.go:1941: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "sudo cat /etc/ssl/certs/20532.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1940: Checking for existence of /usr/share/ca-certificates/20532.pem within VM
functional_test.go:1941: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "sudo cat /usr/share/ca-certificates/20532.pem"
functional_test.go:1940: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1941: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (4.20s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20220329101744-2053 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1968: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "sudo systemctl is-active crio": exit status 1 (666.520269ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
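
Here the non-zero exit is the assertion: systemctl is-active prints "inactive" and exits 3 for a stopped unit, and minikube ssh relays that remote status on stderr while itself exiting 1, as the log shows, so the test passes once crio is confirmed inactive alongside the docker runtime. A Go sketch that treats the exit status as data rather than as a failure:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64",
		"-p", "functional-20220329101744-2053", "ssh", "sudo systemctl is-active crio")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("crio is active:", string(out))
	case errors.As(err, &ee):
		// An inactive unit is an expected, informative outcome here; the
		// remote systemctl status (3 == inactive) appears on stderr.
		fmt.Printf("crio not active (exit %d): %s", ee.ExitCode(), out)
	default:
		panic(err) // the ssh invocation itself failed
	}
}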

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2200: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.53s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2214: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 version -o=json --components
functional_test.go:2214: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 version -o=json --components: (1.533909465s)
--- PASS: TestFunctional/parallel/Version/components (1.53s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image ls --format short
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220329101744-2053 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220329101744-2053
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220329101744-2053
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.46s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image ls --format table
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220329101744-2053 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| k8s.gcr.io/kube-scheduler                   | v1.23.5                        | 884d49d6d8c9f | 53.5MB |
| docker.io/kubernetesui/dashboard            | v2.3.1                         | e1482a24335a6 | 220MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-20220329101744-2053 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.1                            | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-20220329101744-2053 | 720dca54a1473 | 30B    |
| docker.io/library/nginx                     | alpine                         | 53722defe6278 | 23.4MB |
| docker.io/library/nginx                     | latest                         | 12766a6745eea | 142MB  |
| docker.io/library/mysql                     | 5.7                            | 05311a87aeb4d | 450MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | a4ca41631cc7a | 46.8MB |
| k8s.gcr.io/pause                            | latest                         | 350b164e7ae1d | 240kB  |
| k8s.gcr.io/kube-proxy                       | v1.23.5                        | 3c53fa8541f95 | 112MB  |
| k8s.gcr.io/kube-controller-manager          | v1.23.5                        | b0c9e5e4dbb14 | 125MB  |
| docker.io/kubernetesui/metrics-scraper      | v1.0.7                         | 7801cfc6d5c07 | 34.4MB |
| k8s.gcr.io/pause                            | 3.3                            | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/echoserver                       | 1.8                            | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/kube-apiserver                   | v1.23.5                        | 3fc1d62d65872 | 135MB  |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/pause                            | 3.6                            | 6270bb605e12e | 683kB  |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.47s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image ls --format json
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220329101744-2053 image ls --format json:
[{"id":"720dca54a14738446dc1900ae89c68202fb316f1be56de4cc8d5911cdfcee47c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220329101744-2053"],"size":"30"},{"id":"12766a6745eea133de9fdcd03ff720fa971fdaf21113d4bc72b417c123b15619","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.5"],"size":"112000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:v2.3.1"],"size":"220000000"},{"id":"53722defe627853c4f67a743b54246916074a824bc93bc7e05f452c6929374bf","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220329101744-2053"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"05311a87aeb4d7f98b2726c39d4d29d6a174d20953a6d1ceaa236bfa177f5fb6","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"450000000"},{"id":"b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.5"],"size":"125000000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:v1.0.7"],"size":"34400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.5"],"size":"135000000"},{"id":"884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.5"],"size":"53500000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.49s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image ls --format yaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220329101744-2053 image ls --format yaml:
- id: 3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.5
size: "135000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220329101744-2053
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 12766a6745eea133de9fdcd03ff720fa971fdaf21113d4bc72b417c123b15619
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:v1.0.7
size: "34400000"
- id: 05311a87aeb4d7f98b2726c39d4d29d6a174d20953a6d1ceaa236bfa177f5fb6
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "450000000"
- id: 3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.5
size: "112000000"
- id: b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.5
size: "125000000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:v2.3.1
size: "220000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 720dca54a14738446dc1900ae89c68202fb316f1be56de4cc8d5911cdfcee47c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220329101744-2053
size: "30"
- id: 53722defe627853c4f67a743b54246916074a824bc93bc7e05f452c6929374bf
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.5
size: "53500000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.48s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh pgrep buildkitd: exit status 1 (722.433841ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image build -t localhost/my-image:functional-20220329101744-2053 testdata/build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 image build -t localhost/my-image:functional-20220329101744-2053 testdata/build: (2.243578767s)
functional_test.go:317: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220329101744-2053 image build -t localhost/my-image:functional-20220329101744-2053 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 653ea0961123
Removing intermediate container 653ea0961123
---> 8f65715b7f75
Step 3/3 : ADD content.txt /
---> ccee2d86b5ba
Successfully built ccee2d86b5ba
Successfully tagged localhost/my-image:functional-20220329101744-2053
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.41s)
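
minikube image build sends the build context to the cluster's own Docker daemon, so the resulting localhost/my-image tag exists inside the node rather than in the host daemon, which is why the test follows up with minikube image ls. A brief Go sketch of build-then-verify along the same lines:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	p := "functional-20220329101744-2053"
	mk := "out/minikube-darwin-amd64"
	tag := "localhost/my-image:" + p
	// Build inside the cluster's daemon, then confirm the tag shows up in
	// the node's image list (it will not appear in the host's `docker images`).
	if out, err := exec.Command(mk, "-p", p, "image", "build",
		"-t", tag, "testdata/build").CombinedOutput(); err != nil {
		panic(string(out))
	}
	ls, err := exec.Command(mk, "-p", p, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("built:", strings.Contains(string(ls), tag))
}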

TestFunctional/parallel/ImageCommands/Setup (1.92s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.791814254s)
functional_test.go:344: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220329101744-2053
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.92s)

TestFunctional/parallel/DockerEnv/bash (2.72s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:496: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220329101744-2053 docker-env) && out/minikube-darwin-amd64 status -p functional-20220329101744-2053"
=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:496: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220329101744-2053 docker-env) && out/minikube-darwin-amd64 status -p functional-20220329101744-2053": (1.590292876s)
functional_test.go:519: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220329101744-2053 docker-env) && docker images"
functional_test.go:519: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220329101744-2053 docker-env) && docker images": (1.125747861s)
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.72s)
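
docker-env prints shell export lines that point the host docker CLI at the daemon inside the minikube container, and the test evals them in bash before running docker images. A Go sketch that does the same without a shell by parsing those lines; the export KEY="VALUE" format is an assumption based on the bash flavor of the command:

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64",
		"-p", "functional-20220329101744-2053", "docker-env").Output()
	if err != nil {
		panic(err)
	}
	// Collect the exported variables (DOCKER_HOST etc.) into a child env.
	env := os.Environ()
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(line, "export ") {
			kv := strings.ReplaceAll(strings.TrimPrefix(line, "export "), `"`, "")
			env = append(env, kv)
		}
	}
	// Run docker against the cluster's daemon instead of the host's.
	docker := exec.Command("docker", "images")
	docker.Env = env
	docker.Stdout = os.Stdout
	if err := docker.Run(); err != nil {
		panic(err)
	}
}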

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329101744-2053
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329101744-2053: (3.311387243s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.81s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.43s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2060: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 update-context --alsologtostderr -v=2
2022/03/29 10:22:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.43s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.98s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2060: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.98s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.36s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2060: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.36s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329101744-2053
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329101744-2053: (2.456209497s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.97s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:237: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220329101744-2053
functional_test.go:242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329101744-2053
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:242: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220329101744-2053: (4.288682297s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.54s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:377: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image save gcr.io/google-containers/addon-resizer:functional-20220329101744-2053 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:377: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 image save gcr.io/google-containers/addon-resizer:functional-20220329101744-2053 /Users/jenkins/workspace/addon-resizer-save.tar: (2.327403635s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image rm gcr.io/google-containers/addon-resizer:functional-20220329101744-2053
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:406: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 image load /Users/jenkins/workspace/addon-resizer-save.tar: (2.440181005s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.95s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:416: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220329101744-2053
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:421: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220329101744-2053
functional_test.go:421: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220329101744-2053 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220329101744-2053: (2.656800733s)
functional_test.go:426: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220329101744-2053
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.91s)
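
The last several image tests form two round trips: save a tag from the cluster to a tarball and load it back, and save the tag straight into the host docker daemon with --daemon. A condensed Go sketch of those steps; the tarball path here is hypothetical (the run above used /Users/jenkins/workspace/addon-resizer-save.tar):

package main

import "os/exec"

func main() {
	p := "functional-20220329101744-2053"
	mk := "out/minikube-darwin-amd64"
	img := "gcr.io/google-containers/addon-resizer:" + p
	tar := "/tmp/addon-resizer-save.tar" // hypothetical path
	// Export to a tarball, re-import it, then save directly into the
	// host docker daemon, mirroring the three tests above.
	for _, args := range [][]string{
		{"-p", p, "image", "save", img, tar},
		{"-p", p, "image", "load", tar},
		{"-p", p, "image", "save", "--daemon", img},
	} {
		if out, err := exec.Command(mk, args...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}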

TestFunctional/parallel/ProfileCmd/profile_not_create (1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1276: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.00s)

TestFunctional/parallel/ProfileCmd/profile_list (0.75s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1316: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1321: Took "677.453695ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1330: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1335: Took "75.629282ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.75s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.8s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1367: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1372: Took "696.917187ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1380: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1385: Took "99.32447ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.80s)
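
The ProfileCmd steps only assert that each listing exits cleanly and within a time budget. A sketch of consuming the JSON form, assuming the top-level "valid"/"invalid" arrays this minikube release emits and that jq is installed (jq is not part of the suite):

    # print the names of healthy profiles from the structured output
    out/minikube-darwin-amd64 profile list -o json | jq -r '.valid[].Name'
    # --light skips cluster-status validation, which is why it returns in ~100ms above
    out/minikube-darwin-amd64 profile list -o json --light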

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:128: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220329101744-2053 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:148: (dbg) Run:  kubectl --context functional-20220329101744-2053 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [b099af08-b4f3-49a1-9420-407be71dcaf8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [b099af08-b4f3-49a1-9420-407be71dcaf8] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.016055306s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.22s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220329101744-2053 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (5.25s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
=== CONT  TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:235: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (5.25s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:370: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220329101744-2053 tunnel --alsologtostderr] ...
helpers_test.go:501: unable to terminate pid 4722: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
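
The four serial TunnelCmd steps above amount to the standard LoadBalancer workflow on the docker driver: hold a tunnel open, wait for the service to receive an ingress IP, then hit it. A condensed sketch using this run's profile and the nginx-svc service from testdata/testsvc.yaml:

    # terminal 1: hold the tunnel open so LoadBalancer services receive an ingress IP
    out/minikube-darwin-amd64 -p functional-20220329101744-2053 tunnel --alsologtostderr
    # terminal 2: read the assigned IP, then access the service
    kubectl --context functional-20220329101744-2053 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://127.0.0.1    # AccessDirect confirmed this endpoint above

The "unable to terminate pid 4722: operation not permitted" warning in DeleteTunnel is the harness failing to signal the tunnel process, presumably because it runs with elevated privileges; the subtest still passes.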

TestFunctional/parallel/MountCmd/any-port (8.77s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220329101744-2053 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest3339123509:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1648574518684787000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest3339123509/created-by-test
functional_test_mount_test.go:110: wrote "test-1648574518684787000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest3339123509/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1648574518684787000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest3339123509/test-1648574518684787000
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (666.519069ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 29 17:21 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 29 17:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 29 17:21 test-1648574518684787000
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh cat /mount-9p/test-1648574518684787000
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20220329101744-2053 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [62d40bd1-ddbe-49f4-a39a-904955c6f87a] Pending
helpers_test.go:343: "busybox-mount" [62d40bd1-ddbe-49f4-a39a-904955c6f87a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [62d40bd1-ddbe-49f4-a39a-904955c6f87a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.009649783s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20220329101744-2053 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh stat /mount-9p/created-by-pod
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220329101744-2053 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest3339123509:/mount-9p --alsologtostderr -v=1] ...
E0329 10:22:07.338056    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.77s)
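
MountCmd/any-port walks the whole 9p mount lifecycle: mount a host directory into the node, verify it from inside the guest, let a pod consume it, then unmount. The first findmnt probe exiting 1 above is the expected poll-until-mounted retry, not a failure. A minimal sketch, with the hypothetical /tmp/mount-demo standing in for the harness's generated temp directory:

    mkdir -p /tmp/mount-demo
    out/minikube-darwin-amd64 mount -p functional-20220329101744-2053 /tmp/mount-demo:/mount-9p &
    out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh -- ls -la /mount-9p
    out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "sudo umount -f /mount-9p"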

TestFunctional/parallel/MountCmd/specific-port (3.75s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220329101744-2053 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest2658659141:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (742.37675ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220329101744-2053 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest2658659141:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh "sudo umount -f /mount-9p": exit status 1 (658.942246ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-darwin-amd64 -p functional-20220329101744-2053 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220329101744-2053 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mounttest2658659141:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (3.75s)

TestFunctional/delete_addon-resizer_images (0.27s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220329101744-2053
--- PASS: TestFunctional/delete_addon-resizer_images (0.27s)

TestFunctional/delete_my-image_image (0.12s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220329101744-2053
--- PASS: TestFunctional/delete_my-image_image (0.12s)

TestFunctional/delete_minikube_cached_images (0.12s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220329101744-2053
--- PASS: TestFunctional/delete_minikube_cached_images (0.12s)

TestIngressAddonLegacy/StartLegacyK8sCluster (137.55s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220329102231-2053 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0329 10:24:23.471690    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220329102231-2053 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : (2m17.55307977s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (137.55s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.42s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220329102231-2053 addons enable ingress --alsologtostderr -v=5
E0329 10:24:51.182398    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220329102231-2053 addons enable ingress --alsologtostderr -v=5: (14.422206386s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.42s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220329102231-2053 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

TestJSONOutput/start/Command (124.72s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220329102551-2053 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0329 10:26:07.714114    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:26:07.719949    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:26:07.730697    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:26:07.757795    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:26:07.807834    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:26:07.893129    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:26:08.058138    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:26:08.383540    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:26:09.027969    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:26:10.316768    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:26:12.878482    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:26:18.008038    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:26:28.250352    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:26:48.733767    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:27:29.694067    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220329102551-2053 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (2m4.719344803s)
--- PASS: TestJSONOutput/start/Command (124.72s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.92s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220329102551-2053 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.92s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.76s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220329102551-2053 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.76s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (17.92s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220329102551-2053 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220329102551-2053 --output=json --user=testUser: (17.916043154s)
--- PASS: TestJSONOutput/stop/Command (17.92s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220329102822-2053 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220329102822-2053 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (112.634796ms)

-- stdout --
	{"specversion":"1.0","id":"9666cc0f-119d-4a0a-b5f4-a24f34182add","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220329102822-2053] minikube v1.25.2 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"055d4a31-c8fb-4bf9-a46b-abce3eb9a809","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13730"}}
	{"specversion":"1.0","id":"72bc581f-0afe-4473-bb61-5acb79753de8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig"}}
	{"specversion":"1.0","id":"b6e7a3bb-51b1-49dc-9e30-9b779328664f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"068948f8-89a9-4fba-833c-3b0f1754fbd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d795cb4d-bc94-463b-aab7-4ab116106414","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube"}}
	{"specversion":"1.0","id":"d66fcebb-3e81-46ea-8054-bea3fa503caa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20220329102822-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220329102822-2053
--- PASS: TestErrorJSONOutput (0.76s)
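
Every line minikube prints under --output=json is a self-contained CloudEvents-style object, so errors can be machine-filtered instead of scraped. A sketch, assuming jq is available and using a hypothetical profile name; the fail driver and the DRV_UNSUPPORTED_OS event mirror the run above:

    # keep only error events and print their name and message
    out/minikube-darwin-amd64 start -p json-output-error-demo --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # expected, per the stdout above: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on darwin/amd64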

TestKicCustomNetwork/create_custom_network (88.18s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220329102823-2053 --network=
E0329 10:28:51.624034    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:29:23.478453    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220329102823-2053 --network=: (1m15.424270697s)
kic_custom_network_test.go:123: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220329102823-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220329102823-2053
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220329102823-2053: (12.636259789s)
--- PASS: TestKicCustomNetwork/create_custom_network (88.18s)

TestKicCustomNetwork/use_default_bridge_network (76.04s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220329102951-2053 --network=bridge
E0329 10:30:04.318682    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:30:04.324394    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:30:04.339487    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:30:04.360031    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:30:04.409986    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:30:04.493621    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:30:04.659965    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:30:04.987057    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:30:05.636313    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:30:06.916754    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:30:09.484483    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:30:14.608606    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:30:24.895437    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:30:45.384403    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220329102951-2053 --network=bridge: (1m6.275756375s)
kic_custom_network_test.go:123: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220329102951-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220329102951-2053
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220329102951-2053: (9.644982749s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (76.04s)

TestKicExistingNetwork (89.05s)

=== RUN   TestKicExistingNetwork
E0329 10:31:07.673959    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
kic_custom_network_test.go:123: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220329103112-2053 --network=existing-network
E0329 10:31:26.308295    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:31:35.432953    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
kic_custom_network_test.go:94: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220329103112-2053 --network=existing-network: (1m10.934912273s)
helpers_test.go:176: Cleaning up "existing-network-20220329103112-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220329103112-2053
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220329103112-2053: (12.979637263s)
--- PASS: TestKicExistingNetwork (89.05s)
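
TestKicExistingNetwork pre-creates a Docker network and asserts that minikube adopts it rather than creating a duplicate. A sketch with a hypothetical profile name; the network name matches the test's:

    docker network create existing-network
    out/minikube-darwin-amd64 start -p existing-network-demo --network=existing-network
    docker network ls --format {{.Name}}    # existing-network appears once, reused
    out/minikube-darwin-amd64 delete -p existing-network-demo
    # minikube should leave existing-network in place on delete, since it did not create it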

TestKicCustomSubnet (87.12s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:113: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-20220329103236-2053 --subnet=192.168.60.0/24
E0329 10:32:48.226141    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
kic_custom_network_test.go:113: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-20220329103236-2053 --subnet=192.168.60.0/24: (1m13.84022843s)
kic_custom_network_test.go:134: (dbg) Run:  docker network inspect custom-subnet-20220329103236-2053 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-20220329103236-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-20220329103236-2053
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-20220329103236-2053: (13.160160998s)
--- PASS: TestKicCustomSubnet (87.12s)
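
The subnet assertion reduces to a single docker inspect against the network's IPAM config. The same check by hand, with only the profile name swapped for a hypothetical one:

    out/minikube-darwin-amd64 start -p custom-subnet-demo --subnet=192.168.60.0/24
    docker network inspect custom-subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
    # expected: 192.168.60.0/24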

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMountStart/serial/StartWithMountFirst (47.15s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220329103403-2053 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
E0329 10:34:23.430234    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220329103403-2053 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (46.149275856s)
--- PASS: TestMountStart/serial/StartWithMountFirst (47.15s)
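
The MountStart group starts profiles with --no-kubernetes, so only the mount machinery is exercised. With no explicit mount string, the macOS default is assumed here to map /Users into the guest at /minikube-host, which is what the Verify steps list. A sketch with a hypothetical profile name and a subset of the flags above:

    out/minikube-darwin-amd64 start -p mount-demo --memory=2048 --mount --mount-port 46464 --no-kubernetes --driver=docker
    out/minikube-darwin-amd64 -p mount-demo ssh -- ls /minikube-host    # should list the host /Users tree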

TestMountStart/serial/VerifyMountFirst (0.61s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220329103403-2053 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.61s)

TestMountStart/serial/StartWithMountSecond (47.81s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220329103403-2053 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
E0329 10:35:04.276897    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:35:32.067981    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220329103403-2053 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (46.805198996s)
--- PASS: TestMountStart/serial/StartWithMountSecond (47.81s)

TestMountStart/serial/VerifyMountSecond (0.61s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220329103403-2053 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.61s)

TestMountStart/serial/DeleteFirst (11.86s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220329103403-2053 --alsologtostderr -v=5
E0329 10:35:46.504511    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
pause_test.go:133: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220329103403-2053 --alsologtostderr -v=5: (11.863392002s)
--- PASS: TestMountStart/serial/DeleteFirst (11.86s)

TestMountStart/serial/VerifyMountPostDelete (0.6s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220329103403-2053 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.60s)

TestMountStart/serial/Stop (7.13s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220329103403-2053
mount_start_test.go:156: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220329103403-2053: (7.128148817s)
--- PASS: TestMountStart/serial/Stop (7.13s)

TestMountStart/serial/RestartStopped (29.47s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:167: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220329103403-2053
E0329 10:36:07.662664    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
mount_start_test.go:167: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220329103403-2053: (28.462544988s)
--- PASS: TestMountStart/serial/RestartStopped (29.47s)

TestMountStart/serial/VerifyMountPostStop (0.6s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220329103403-2053 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.60s)

TestMultiNode/serial/FreshStart2Nodes (231.56s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220329103641-2053 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0329 10:39:23.426003    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 10:40:04.277701    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220329103641-2053 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (3m50.477097419s)
multinode_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status --alsologtostderr: (1.079004004s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (231.56s)
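
A multi-node cluster is a single start flag; status then reports one block per node. A sketch with a hypothetical profile name, mirroring the flags above:

    out/minikube-darwin-amd64 start -p multinode-demo --nodes=2 --memory=2200 --driver=docker
    out/minikube-darwin-amd64 -p multinode-demo status    # expect a control plane and a worker, both Running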

TestMultiNode/serial/DeployApp2Nodes (5.92s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.971135338s)
multinode_test.go:491: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- rollout status deployment/busybox
multinode_test.go:491: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- rollout status deployment/busybox: (2.534265569s)
multinode_test.go:497: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:517: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- exec busybox-7978565885-6pwck -- nslookup kubernetes.io
multinode_test.go:517: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- exec busybox-7978565885-q6hmb -- nslookup kubernetes.io
multinode_test.go:527: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- exec busybox-7978565885-6pwck -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- exec busybox-7978565885-q6hmb -- nslookup kubernetes.default
multinode_test.go:535: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- exec busybox-7978565885-6pwck -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:535: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- exec busybox-7978565885-q6hmb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.92s)
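
DeployApp2Nodes spreads two busybox replicas across the nodes and then asserts cluster DNS works from each. The core checks, exactly as issued in this run (pod names are the generated ones above):

    out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- get pods -o jsonpath='{.items[*].status.podIP}'
    out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- exec busybox-7978565885-6pwck -- nslookup kubernetes.default.svc.cluster.local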

TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:545: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:553: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- exec busybox-7978565885-6pwck -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- exec busybox-7978565885-6pwck -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:553: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- exec busybox-7978565885-q6hmb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- exec busybox-7978565885-q6hmb -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
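
host.minikube.internal is the in-cluster name for the host machine; in this run it resolves to 192.168.65.2, the Docker Desktop gateway. Each pod resolves the name and then pings the address, exactly as above:

    out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- exec busybox-7978565885-6pwck -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-darwin-amd64 kubectl -p multinode-20220329103641-2053 -- exec busybox-7978565885-6pwck -- sh -c "ping -c 1 192.168.65.2"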

TestMultiNode/serial/AddNode (109.66s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220329103641-2053 -v 3 --alsologtostderr
E0329 10:41:07.659487    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220329103641-2053 -v 3 --alsologtostderr: (1m48.098969354s)
multinode_test.go:117: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status --alsologtostderr: (1.562251564s)
--- PASS: TestMultiNode/serial/AddNode (109.66s)

TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

                                                
                                    
TestMultiNode/serial/CopyFile (22.7s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status --output json --alsologtostderr
E0329 10:42:30.785542    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
multinode_test.go:174: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status --output json --alsologtostderr: (1.527702265s)
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 cp testdata/cp-test.txt multinode-20220329103641-2053:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 cp multinode-20220329103641-2053:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mk_cp_test258145225/cp-test_multinode-20220329103641-2053.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 cp multinode-20220329103641-2053:/home/docker/cp-test.txt multinode-20220329103641-2053-m02:/home/docker/cp-test_multinode-20220329103641-2053_multinode-20220329103641-2053-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053-m02 "sudo cat /home/docker/cp-test_multinode-20220329103641-2053_multinode-20220329103641-2053-m02.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 cp multinode-20220329103641-2053:/home/docker/cp-test.txt multinode-20220329103641-2053-m03:/home/docker/cp-test_multinode-20220329103641-2053_multinode-20220329103641-2053-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053-m03 "sudo cat /home/docker/cp-test_multinode-20220329103641-2053_multinode-20220329103641-2053-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 cp testdata/cp-test.txt multinode-20220329103641-2053-m02:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 cp multinode-20220329103641-2053-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mk_cp_test258145225/cp-test_multinode-20220329103641-2053-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 cp multinode-20220329103641-2053-m02:/home/docker/cp-test.txt multinode-20220329103641-2053:/home/docker/cp-test_multinode-20220329103641-2053-m02_multinode-20220329103641-2053.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053 "sudo cat /home/docker/cp-test_multinode-20220329103641-2053-m02_multinode-20220329103641-2053.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 cp multinode-20220329103641-2053-m02:/home/docker/cp-test.txt multinode-20220329103641-2053-m03:/home/docker/cp-test_multinode-20220329103641-2053-m02_multinode-20220329103641-2053-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053-m03 "sudo cat /home/docker/cp-test_multinode-20220329103641-2053-m02_multinode-20220329103641-2053-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 cp testdata/cp-test.txt multinode-20220329103641-2053-m03:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 cp multinode-20220329103641-2053-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/mk_cp_test258145225/cp-test_multinode-20220329103641-2053-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 cp multinode-20220329103641-2053-m03:/home/docker/cp-test.txt multinode-20220329103641-2053:/home/docker/cp-test_multinode-20220329103641-2053-m03_multinode-20220329103641-2053.txt
helpers_test.go:555: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220329103641-2053 cp multinode-20220329103641-2053-m03:/home/docker/cp-test.txt multinode-20220329103641-2053:/home/docker/cp-test_multinode-20220329103641-2053-m03_multinode-20220329103641-2053.txt: (1.000442634s)
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053 "sudo cat /home/docker/cp-test_multinode-20220329103641-2053-m03_multinode-20220329103641-2053.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 cp multinode-20220329103641-2053-m03:/home/docker/cp-test.txt multinode-20220329103641-2053-m02:/home/docker/cp-test_multinode-20220329103641-2053-m03_multinode-20220329103641-2053-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 ssh -n multinode-20220329103641-2053-m02 "sudo cat /home/docker/cp-test_multinode-20220329103641-2053-m03_multinode-20220329103641-2053-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (22.70s)
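The cp sequence above cycles through every source/destination form that minikube cp accepts. Condensed, with placeholder profile and node names:

	# local -> node
	minikube -p <profile> cp testdata/cp-test.txt <node-a>:/home/docker/cp-test.txt
	# node -> local
	minikube -p <profile> cp <node-a>:/home/docker/cp-test.txt /tmp/cp-test.txt
	# node -> node, then verify on the receiving side
	minikube -p <profile> cp <node-a>:/home/docker/cp-test.txt <node-b>:/home/docker/cp-test.txt
	minikube -p <profile> ssh -n <node-b> "sudo cat /home/docker/cp-test.txt"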

                                                
                                    
TestMultiNode/serial/StopNode (11.47s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:215: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 node stop m03
multinode_test.go:215: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220329103641-2053 node stop m03: (9.024467019s)
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status: exit status 7 (1.219620241s)

                                                
                                                
-- stdout --
	multinode-20220329103641-2053
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220329103641-2053-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220329103641-2053-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status --alsologtostderr
multinode_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status --alsologtostderr: exit status 7 (1.229395282s)

                                                
                                                
-- stdout --
	multinode-20220329103641-2053
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220329103641-2053-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220329103641-2053-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0329 10:43:03.608629    9102 out.go:297] Setting OutFile to fd 1 ...
	I0329 10:43:03.608767    9102 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:43:03.608771    9102 out.go:310] Setting ErrFile to fd 2...
	I0329 10:43:03.608775    9102 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:43:03.608846    9102 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 10:43:03.609009    9102 out.go:304] Setting JSON to false
	I0329 10:43:03.609025    9102 mustload.go:65] Loading cluster: multinode-20220329103641-2053
	I0329 10:43:03.609279    9102 config.go:176] Loaded profile config "multinode-20220329103641-2053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 10:43:03.609292    9102 status.go:253] checking status of multinode-20220329103641-2053 ...
	I0329 10:43:03.609648    9102 cli_runner.go:133] Run: docker container inspect multinode-20220329103641-2053 --format={{.State.Status}}
	I0329 10:43:03.726462    9102 status.go:328] multinode-20220329103641-2053 host status = "Running" (err=<nil>)
	I0329 10:43:03.726499    9102 host.go:66] Checking if "multinode-20220329103641-2053" exists ...
	I0329 10:43:03.726839    9102 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329103641-2053
	I0329 10:43:03.843959    9102 host.go:66] Checking if "multinode-20220329103641-2053" exists ...
	I0329 10:43:03.844246    9102 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 10:43:03.844314    9102 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329103641-2053
	I0329 10:43:03.966327    9102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60366 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329103641-2053/id_rsa Username:docker}
	I0329 10:43:04.052684    9102 ssh_runner.go:195] Run: systemctl --version
	I0329 10:43:04.057188    9102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 10:43:04.065990    9102 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220329103641-2053
	I0329 10:43:04.183403    9102 kubeconfig.go:92] found "multinode-20220329103641-2053" server: "https://127.0.0.1:60372"
	I0329 10:43:04.183428    9102 api_server.go:165] Checking apiserver status ...
	I0329 10:43:04.183467    9102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0329 10:43:04.193263    9102 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1752/cgroup
	I0329 10:43:04.200641    9102 api_server.go:181] apiserver freezer: "7:freezer:/docker/67259e9739666577a18999f7ae1f2eb5944dda9c66a4e111cb62e97a963eab7d/kubepods/burstable/pod663a6a073e54d6dbdead8f11ff57bfe6/ebba0153c2ca02a0a97a15f41561c4377c4188c6e078c66bea35e00ebd0662f6"
	I0329 10:43:04.200710    9102 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/67259e9739666577a18999f7ae1f2eb5944dda9c66a4e111cb62e97a963eab7d/kubepods/burstable/pod663a6a073e54d6dbdead8f11ff57bfe6/ebba0153c2ca02a0a97a15f41561c4377c4188c6e078c66bea35e00ebd0662f6/freezer.state
	I0329 10:43:04.207570    9102 api_server.go:203] freezer state: "THAWED"
	I0329 10:43:04.207599    9102 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60372/healthz ...
	I0329 10:43:04.213609    9102 api_server.go:266] https://127.0.0.1:60372/healthz returned 200:
	ok
	I0329 10:43:04.213621    9102 status.go:419] multinode-20220329103641-2053 apiserver status = Running (err=<nil>)
	I0329 10:43:04.213628    9102 status.go:255] multinode-20220329103641-2053 status: &{Name:multinode-20220329103641-2053 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0329 10:43:04.213643    9102 status.go:253] checking status of multinode-20220329103641-2053-m02 ...
	I0329 10:43:04.213932    9102 cli_runner.go:133] Run: docker container inspect multinode-20220329103641-2053-m02 --format={{.State.Status}}
	I0329 10:43:04.330986    9102 status.go:328] multinode-20220329103641-2053-m02 host status = "Running" (err=<nil>)
	I0329 10:43:04.331013    9102 host.go:66] Checking if "multinode-20220329103641-2053-m02" exists ...
	I0329 10:43:04.331326    9102 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220329103641-2053-m02
	I0329 10:43:04.449007    9102 host.go:66] Checking if "multinode-20220329103641-2053-m02" exists ...
	I0329 10:43:04.450245    9102 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0329 10:43:04.450308    9102 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220329103641-2053-m02
	I0329 10:43:04.579543    9102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60709 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/machines/multinode-20220329103641-2053-m02/id_rsa Username:docker}
	I0329 10:43:04.667075    9102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0329 10:43:04.676336    9102 status.go:255] multinode-20220329103641-2053-m02 status: &{Name:multinode-20220329103641-2053-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0329 10:43:04.676354    9102 status.go:253] checking status of multinode-20220329103641-2053-m03 ...
	I0329 10:43:04.676625    9102 cli_runner.go:133] Run: docker container inspect multinode-20220329103641-2053-m03 --format={{.State.Status}}
	I0329 10:43:04.794687    9102 status.go:328] multinode-20220329103641-2053-m03 host status = "Stopped" (err=<nil>)
	I0329 10:43:04.794707    9102 status.go:341] host is not running, skipping remaining checks
	I0329 10:43:04.794716    9102 status.go:255] multinode-20220329103641-2053-m03 status: &{Name:multinode-20220329103641-2053-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (11.47s)
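As both status calls above show, minikube status exits 7 rather than 0 once any node reports Stopped, so the exit code alone is enough for scripting. A small guard sketch, with <profile> as a placeholder:

	# Non-zero exit (7 above) means at least one node is not running.
	if ! minikube -p <profile> status >/dev/null; then
	    echo "cluster is not fully running" >&2
	fi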

                                                
                                    
TestMultiNode/serial/StartAfterStop (50.74s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:249: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 node start m03 --alsologtostderr
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220329103641-2053 node start m03 --alsologtostderr: (48.98057694s)
multinode_test.go:266: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status
multinode_test.go:266: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status: (1.599967231s)
multinode_test.go:280: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (50.74s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (249.9s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220329103641-2053
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220329103641-2053
E0329 10:44:23.424774    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220329103641-2053: (43.0536215s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220329103641-2053 --wait=true -v=8 --alsologtostderr
E0329 10:45:04.274522    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:46:07.651843    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 10:46:27.425558    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
multinode_test.go:300: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220329103641-2053 --wait=true -v=8 --alsologtostderr: (3m26.754098371s)
multinode_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220329103641-2053
--- PASS: TestMultiNode/serial/RestartKeepsNodes (249.90s)

                                                
                                    
TestMultiNode/serial/DeleteNode (17.14s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:399: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 node delete m03
multinode_test.go:399: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220329103641-2053 node delete m03: (14.073139836s)
multinode_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status --alsologtostderr
multinode_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status --alsologtostderr: (1.154014065s)
multinode_test.go:419: (dbg) Run:  docker volume ls
multinode_test.go:429: (dbg) Run:  kubectl get nodes
multinode_test.go:429: (dbg) Done: kubectl get nodes: (1.743065881s)
multinode_test.go:437: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (17.14s)
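The final check above uses a go-template to print each remaining node's Ready condition, one True/False per line:

	# For every node, print the status of its Ready condition.
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'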

                                                
                                    
TestMultiNode/serial/StopMultiNode (35.36s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:319: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 stop
multinode_test.go:319: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220329103641-2053 stop: (34.809711016s)
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status: exit status 7 (272.636659ms)

                                                
                                                
-- stdout --
	multinode-20220329103641-2053
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220329103641-2053-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:332: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status --alsologtostderr
multinode_test.go:332: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status --alsologtostderr: exit status 7 (278.706324ms)

                                                
                                                
-- stdout --
	multinode-20220329103641-2053
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220329103641-2053-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0329 10:48:57.689901    9955 out.go:297] Setting OutFile to fd 1 ...
	I0329 10:48:57.690063    9955 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:48:57.690068    9955 out.go:310] Setting ErrFile to fd 2...
	I0329 10:48:57.690071    9955 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0329 10:48:57.690150    9955 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/bin
	I0329 10:48:57.690318    9955 out.go:304] Setting JSON to false
	I0329 10:48:57.690333    9955 mustload.go:65] Loading cluster: multinode-20220329103641-2053
	I0329 10:48:57.691212    9955 config.go:176] Loaded profile config "multinode-20220329103641-2053": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.5
	I0329 10:48:57.691228    9955 status.go:253] checking status of multinode-20220329103641-2053 ...
	I0329 10:48:57.691615    9955 cli_runner.go:133] Run: docker container inspect multinode-20220329103641-2053 --format={{.State.Status}}
	I0329 10:48:57.807321    9955 status.go:328] multinode-20220329103641-2053 host status = "Stopped" (err=<nil>)
	I0329 10:48:57.807342    9955 status.go:341] host is not running, skipping remaining checks
	I0329 10:48:57.807350    9955 status.go:255] multinode-20220329103641-2053 status: &{Name:multinode-20220329103641-2053 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0329 10:48:57.807377    9955 status.go:253] checking status of multinode-20220329103641-2053-m02 ...
	I0329 10:48:57.807704    9955 cli_runner.go:133] Run: docker container inspect multinode-20220329103641-2053-m02 --format={{.State.Status}}
	I0329 10:48:57.927210    9955 status.go:328] multinode-20220329103641-2053-m02 host status = "Stopped" (err=<nil>)
	I0329 10:48:57.927229    9955 status.go:341] host is not running, skipping remaining checks
	I0329 10:48:57.927234    9955 status.go:255] multinode-20220329103641-2053-m02 status: &{Name:multinode-20220329103641-2053-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (35.36s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (138.33s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:349: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:359: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220329103641-2053 --wait=true -v=8 --alsologtostderr --driver=docker 
E0329 10:49:23.422226    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 10:50:04.270091    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 10:51:07.650314    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
multinode_test.go:359: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220329103641-2053 --wait=true -v=8 --alsologtostderr --driver=docker : (2m15.276708061s)
multinode_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status --alsologtostderr
multinode_test.go:365: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220329103641-2053 status --alsologtostderr: (1.123041293s)
multinode_test.go:379: (dbg) Run:  kubectl get nodes
multinode_test.go:379: (dbg) Done: kubectl get nodes: (1.777096398s)
multinode_test.go:387: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (138.33s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (100.65s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:448: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220329103641-2053
multinode_test.go:457: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220329103641-2053-m02 --driver=docker 
multinode_test.go:457: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220329103641-2053-m02 --driver=docker : exit status 14 (312.604705ms)

                                                
                                                
-- stdout --
	* [multinode-20220329103641-2053-m02] minikube v1.25.2 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13730
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220329103641-2053-m02' is duplicated with machine name 'multinode-20220329103641-2053-m02' in profile 'multinode-20220329103641-2053'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220329103641-2053-m03 --driver=docker 
E0329 10:52:26.497454    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
multinode_test.go:465: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220329103641-2053-m03 --driver=docker : (1m24.718478363s)
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220329103641-2053
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220329103641-2053: exit status 80 (605.47444ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20220329103641-2053
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220329103641-2053-m03 already exists in multinode-20220329103641-2053-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:477: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220329103641-2053-m03
multinode_test.go:477: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220329103641-2053-m03: (14.972083276s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (100.65s)

                                                
                                    
TestPreload (206.77s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220329105319-2053 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0329 10:54:23.415022    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 10:55:04.260973    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
preload_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20220329105319-2053 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: (2m28.171282068s)
preload_test.go:62: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20220329105319-2053 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:62: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-20220329105319-2053 -- docker pull gcr.io/k8s-minikube/busybox: (1.78080538s)
preload_test.go:72: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220329105319-2053 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3
E0329 10:56:07.653213    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
preload_test.go:72: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20220329105319-2053 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3: (43.128005078s)
preload_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20220329105319-2053 -- docker images
helpers_test.go:176: Cleaning up "test-preload-20220329105319-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220329105319-2053
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220329105319-2053: (12.989250232s)
--- PASS: TestPreload (206.77s)
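The preload test round-trips an extra image through a Kubernetes version bump: start with --preload=false, pull an image into the node's Docker, restart on a newer version, and confirm the image is still listed. Condensed from the commands above, with a placeholder profile:

	minikube start -p <profile> --memory=2200 --preload=false --driver=docker --kubernetes-version=v1.17.0
	minikube ssh -p <profile> -- docker pull gcr.io/k8s-minikube/busybox
	minikube start -p <profile> --memory=2200 --driver=docker --kubernetes-version=v1.17.3
	minikube ssh -p <profile> -- docker images   # busybox should survive the restart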

                                                
                                    
TestScheduledStopUnix (153.77s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220329105646-2053 --memory=2048 --driver=docker 
scheduled_stop_test.go:129: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220329105646-2053 --memory=2048 --driver=docker : (1m14.934042441s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220329105646-2053 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220329105646-2053 -n scheduled-stop-20220329105646-2053
scheduled_stop_test.go:170: signal error was:  <nil>
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220329105646-2053 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220329105646-2053 --cancel-scheduled
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220329105646-2053 -n scheduled-stop-20220329105646-2053
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220329105646-2053
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220329105646-2053 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
E0329 10:59:10.777021    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220329105646-2053
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220329105646-2053: exit status 7 (156.158477ms)

                                                
                                                
-- stdout --
	scheduled-stop-20220329105646-2053
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220329105646-2053 -n scheduled-stop-20220329105646-2053
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220329105646-2053 -n scheduled-stop-20220329105646-2053: exit status 7 (160.59849ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20220329105646-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220329105646-2053
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220329105646-2053: (6.193493462s)
--- PASS: TestScheduledStopUnix (153.77s)
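The scheduled-stop flow above schedules a stop, cancels it, then re-schedules one and lets it fire. A condensed sketch with a placeholder profile; the sleep length is illustrative:

	minikube stop -p <profile> --schedule 5m          # schedule a stop 5 minutes out
	minikube stop -p <profile> --cancel-scheduled     # cancel it; the host stays Running
	minikube stop -p <profile> --schedule 15s         # re-schedule and let it fire
	sleep 20
	minikube status -p <profile> --format={{.Host}}   # prints Stopped, exits 7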

                                                
                                    
TestSkaffold (120.39s)

=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3910613953 version
skaffold_test.go:61: skaffold version: v1.37.0
skaffold_test.go:64: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20220329105920-2053 --memory=2600 --driver=docker 
E0329 10:59:23.411142    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 11:00:04.259202    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
skaffold_test.go:64: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20220329105920-2053 --memory=2600 --driver=docker : (1m15.414797854s)
skaffold_test.go:84: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:108: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3910613953 run --minikube-profile skaffold-20220329105920-2053 --kube-context skaffold-20220329105920-2053 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:108: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3910613953 run --minikube-profile skaffold-20220329105920-2053 --kube-context skaffold-20220329105920-2053 --status-check=true --port-forward=false --interactive=false: (19.976719327s)
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:343: "leeroy-app-65854c5768-9l4sx" [5acd026e-3435-43ec-b06e-0289f7cc6110] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-app healthy within 5.0196521s
skaffold_test.go:117: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:343: "leeroy-web-c9c68956b-n7n9f" [3b73a3b5-090b-456a-91c5-79b5cc33b17a] Running
skaffold_test.go:117: (dbg) TestSkaffold: app=leeroy-web healthy within 5.016371146s
helpers_test.go:176: Cleaning up "skaffold-20220329105920-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20220329105920-2053
E0329 11:01:07.653350    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20220329105920-2053: (13.532588643s)
--- PASS: TestSkaffold (120.39s)
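The skaffold invocation above pins both the build target and the deploy context to the same minikube profile, with status checking on and port forwarding and prompts off. The same call with a placeholder profile/context name:

	skaffold run --minikube-profile <profile> --kube-context <profile> \
	  --status-check=true --port-forward=false --interactive=false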

                                                
                                    
TestInsufficientStorage (64.65s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220329110120-2053 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220329110120-2053 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (50.446282251s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b6d8b66e-145f-4f26-b10e-e87b739cbb48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220329110120-2053] minikube v1.25.2 on Darwin 11.2.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"adb39a89-83a5-434b-afd6-69743e0a0b85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13730"}}
	{"specversion":"1.0","id":"d2de3bc6-b0ff-4677-8a60-b63b06069fb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig"}}
	{"specversion":"1.0","id":"2e72071f-0046-4a6c-9611-d2fc26a6132f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"2049dc2e-0f4b-4acd-aeac-02b18ceafc4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"085a4e84-de5c-45cf-986a-520d55743e05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube"}}
	{"specversion":"1.0","id":"a6f28daa-f7a2-4ad3-9929-353fde7c87a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4c34bbb3-9b5d-4ae9-948c-1b3b728bd433","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"42dabd38-d2f7-49f3-a50d-6724d093c9f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e00868f6-eafd-491a-be6b-52abd06e7016","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220329110120-2053 in cluster insufficient-storage-20220329110120-2053","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b230dd23-26db-4e9b-b8e0-fa3237e652de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"23ef98a8-81e6-4c3f-a547-ee873f57974f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e1d0922c-ce57-44d2-9843-be938ed66bac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220329110120-2053 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220329110120-2053 --output=json --layout=cluster: exit status 7 (603.115197ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220329110120-2053","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220329110120-2053","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0329 11:02:11.932698   12017 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220329110120-2053" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig

                                                
                                                
** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220329110120-2053 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220329110120-2053 --output=json --layout=cluster: exit status 7 (606.42941ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20220329110120-2053","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220329110120-2053","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0329 11:02:12.539947   12034 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220329110120-2053" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	E0329 11:02:12.548284   12034 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/insufficient-storage-20220329110120-2053/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20220329110120-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220329110120-2053
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220329110120-2053: (12.990388721s)
--- PASS: TestInsufficientStorage (64.65s)
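With --output=json, start emits one CloudEvents-style JSON object per line, as captured above. A small jq sketch for surfacing only the error events (assumes jq is installed; not part of the test itself):

	minikube start -p <profile> --output=json --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'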

                                                
                                    
TestRunningBinaryUpgrade (206.86s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2536342713.exe start -p running-upgrade-20220329110642-2053 --memory=2200 --vm-driver=docker 
E0329 11:07:19.309260    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
version_upgrade_test.go:127: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2536342713.exe start -p running-upgrade-20220329110642-2053 --memory=2200 --vm-driver=docker : (1m37.077583243s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-20220329110642-2053 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0329 11:08:41.234426    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:09:06.509583    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 11:09:23.418553    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-20220329110642-2053 --memory=2200 --alsologtostderr -v=1 --driver=docker : (1m39.169863332s)
helpers_test.go:176: Cleaning up "running-upgrade-20220329110642-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220329110642-2053
E0329 11:10:04.268646    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20220329110642-2053: (9.760221603s)
--- PASS: TestRunningBinaryUpgrade (206.86s)
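This test starts a cluster with an old release (v1.9.0 above) and then, while it is still running, restarts the same profile with the binary under test. Sketched with hypothetical binary paths standing in for the cached release and the build under test:

	./minikube-v1.9.0 start -p <profile> --memory=2200 --vm-driver=docker
	./minikube-head   start -p <profile> --memory=2200 --driver=docker   # upgrade in place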

                                                
                                    
TestKubernetesUpgrade (158.87s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220329111009-2053 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0329 11:10:57.322980    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:11:07.659574    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
version_upgrade_test.go:229: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220329111009-2053 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : (1m11.540200306s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220329111009-2053
E0329 11:11:25.084712    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220329111009-2053: (17.216996179s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220329111009-2053 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220329111009-2053 status --format={{.Host}}: exit status 7 (179.728556ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220329111009-2053 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220329111009-2053 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker : (49.285553312s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220329111009-2053 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220329111009-2053 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220329111009-2053 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (334.536466ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220329111009-2053] minikube v1.25.2 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13730
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220329111009-2053
	    minikube start -p kubernetes-upgrade-20220329111009-2053 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220329111009-20532 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220329111009-2053 --kubernetes-version=v1.23.6-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220329111009-2053 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220329111009-2053 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker : (7.604092432s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20220329111009-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220329111009-2053

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220329111009-2053: (12.605546145s)
--- PASS: TestKubernetesUpgrade (158.87s)
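
For reference, the upgrade path this test exercises can be reproduced by hand. A minimal sketch, using a hypothetical profile name upgrade-demo (the version and driver flags match the ones logged above):

$ minikube start -p upgrade-demo --kubernetes-version=v1.16.0 --driver=docker
$ minikube stop -p upgrade-demo
$ minikube start -p upgrade-demo --kubernetes-version=v1.23.6-rc.0 --driver=docker
# a downgrade attempt is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106):
$ minikube start -p upgrade-demo --kubernetes-version=v1.16.0 --driver=docker
$ minikube delete -p upgrade-demo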

                                                
                                    
x
+
TestMissingContainerUpgrade (189.34s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.462201732.exe start -p missing-upgrade-20220329110953-2053 --memory=2200 --driver=docker 

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.462201732.exe start -p missing-upgrade-20220329110953-2053 --memory=2200 --driver=docker : (1m36.428523607s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220329110953-2053
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220329110953-2053: (6.006048589s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220329110953-2053
version_upgrade_test.go:336: (dbg) Run:  out/minikube-darwin-amd64 start -p missing-upgrade-20220329110953-2053 --memory=2200 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-darwin-amd64 start -p missing-upgrade-20220329110953-2053 --memory=2200 --alsologtostderr -v=1 --driver=docker : (1m11.050942964s)
helpers_test.go:176: Cleaning up "missing-upgrade-20220329110953-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220329110953-2053

                                                
                                                
=== CONT  TestMissingContainerUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20220329110953-2053: (14.740622106s)
--- PASS: TestMissingContainerUpgrade (189.34s)
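
This scenario deletes the cluster container out from under minikube and expects start to rebuild it. A rough hand-run equivalent, with <profile> standing in for a real profile name:

$ docker stop <profile>
$ docker rm <profile>
# minikube detects the missing container and recreates it from the saved profile:
$ minikube start -p <profile> --driver=docker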

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.47s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.25.2 on darwin
- MINIKUBE_LOCATION=13730
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current119132690
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current119132690/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current119132690/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.11.0-to-current119132690/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.47s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.34s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.25.2 on darwin
- MINIKUBE_LOCATION=13730
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current3766574773
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current3766574773/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current3766574773/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/upgrade-v1.2.0-to-current3766574773/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.34s)
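
Both hyperkit skip-upgrade runs warn that the driver binary could not be re-owned, because the suite passes --interactive=false and sudo would need a password. In an interactive session the two commands from the log can be run manually; a sketch assuming the default MINIKUBE_HOME of ~/.minikube:

$ sudo chown root:wheel ~/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s ~/.minikube/bin/docker-machine-driver-hyperkit    # setuid, so minikube can launch the driver without sudo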

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.85s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (151.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.714928225.exe start -p stopped-upgrade-20220329111247-2053 --memory=2200 --vm-driver=docker 

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.714928225.exe start -p stopped-upgrade-20220329111247-2053 --memory=2200 --vm-driver=docker : (1m28.633692099s)
version_upgrade_test.go:199: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.714928225.exe -p stopped-upgrade-20220329111247-2053 stop
version_upgrade_test.go:199: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.714928225.exe -p stopped-upgrade-20220329111247-2053 stop: (4.919833013s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-20220329111247-2053 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0329 11:14:23.420049    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-20220329111247-2053 --memory=2200 --alsologtostderr -v=1 --driver=docker : (58.141466929s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (151.70s)
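
Unlike TestKubernetesUpgrade, this test upgrades the minikube binary itself: an archived v1.9.0 release creates and stops the cluster, then the freshly built binary restarts it. Sketched with a hypothetical path to the old release (note the older --vm-driver spelling it used):

$ ./minikube-v1.9.0 start -p stopped-upgrade --vm-driver=docker
$ ./minikube-v1.9.0 stop -p stopped-upgrade
$ out/minikube-darwin-amd64 start -p stopped-upgrade --driver=docker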

                                                
                                    
x
+
TestPause/serial/Start (104.35s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220329111302-2053 --memory=2048 --install-addons=false --wait=all --driver=docker 

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220329111302-2053 --memory=2048 --install-addons=false --wait=all --driver=docker : (1m44.350945016s)
--- PASS: TestPause/serial/Start (104.35s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.1s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220329111302-2053 --alsologtostderr -v=1 --driver=docker 
pause_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220329111302-2053 --alsologtostderr -v=1 --driver=docker : (7.08735169s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.10s)

                                                
                                    
x
+
TestPause/serial/Pause (0.86s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220329111302-2053 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.64s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20220329111302-2053 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20220329111302-2053 --output=json --layout=cluster: exit status 2 (640.096204ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20220329111302-2053","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220329111302-2053","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.64s)
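
The non-zero exit is expected here: minikube status signals a paused cluster with exit status 2 while still printing the JSON document, in which paused components carry StatusCode 418 ("Paused") and the stopped kubelet 405 ("Stopped"). Assuming jq is available, the state can still be read off despite the exit code (<profile> is a placeholder):

$ minikube status -p <profile> --output=json --layout=cluster | jq -r '.StatusName'
Paused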

                                                
                                    
x
+
TestPause/serial/Unpause (0.84s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-20220329111302-2053 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.84s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.81s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220329111302-2053 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.81s)
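
Taken together, the serial pause tests walk the full lifecycle: pause, inspect, unpause, pause again. By hand, with <profile> as a placeholder:

$ minikube pause -p <profile>      # freezes the cluster's containers in place
$ minikube unpause -p <profile>
$ minikube pause -p <profile>      # re-pausing after an unpause succeeds, which is what this test checks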

                                                
                                    
x
+
TestPause/serial/DeletePaused (9.57s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-20220329111302-2053 --alsologtostderr -v=5
E0329 11:15:04.268461    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
pause_test.go:133: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-20220329111302-2053 --alsologtostderr -v=5: (9.567754516s)
--- PASS: TestPause/serial/DeletePaused (9.57s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (1s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:169: (dbg) Run:  docker ps -a
pause_test.go:174: (dbg) Run:  docker volume inspect pause-20220329111302-2053
pause_test.go:174: (dbg) Non-zero exit: docker volume inspect pause-20220329111302-2053: exit status 1 (114.97065ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220329111302-2053

                                                
                                                
** /stderr **
pause_test.go:179: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.00s)
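
Deletion is verified from the Docker side: once the profile is gone, its container, volume, and network must be gone too. A hand-rolled version of the checks above, with <profile> as a placeholder:

$ docker ps -a --filter name=<profile>    # should list no containers
$ docker volume inspect <profile>         # exits 1 with "No such volume" once deleted
$ docker network ls                       # the profile's network should be absent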

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220329111508-2053 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:84: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220329111508-2053 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (334.352418ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-20220329111508-2053] minikube v1.25.2 on Darwin 11.2.3
	  - MINIKUBE_LOCATION=13730
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.33s)
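
Exit status 14 (MK_USAGE) is the expected outcome: --no-kubernetes and --kubernetes-version are mutually exclusive. Following the suggestion in the error output, any stored version can be cleared before retrying (<profile> is a placeholder):

$ minikube config unset kubernetes-version
$ minikube start -p <profile> --no-kubernetes --driver=docker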

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (54.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220329111508-2053 --driver=docker 

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220329111508-2053 --driver=docker : (53.494266345s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220329111508-2053 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (54.17s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (2.75s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220329111247-2053
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220329111247-2053: (2.749478376s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (104.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220329110225-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
E0329 11:15:50.783669    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 11:15:57.312412    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20220329110225-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (1m44.607147568s)
--- PASS: TestNetworkPlugins/group/auto/Start (104.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (26.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220329111508-2053 --no-kubernetes --driver=docker 
E0329 11:16:07.638849    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
no_kubernetes_test.go:113: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220329111508-2053 --no-kubernetes --driver=docker : (14.410007123s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220329111508-2053 status -o json
no_kubernetes_test.go:201: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20220329111508-2053 status -o json: exit status 2 (638.706453ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-20220329111508-2053","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20220329111508-2053
no_kubernetes_test.go:125: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20220329111508-2053: (11.363011647s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (26.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (37.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220329111508-2053 --no-kubernetes --driver=docker 
no_kubernetes_test.go:137: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220329111508-2053 --no-kubernetes --driver=docker : (37.987856997s)
--- PASS: TestNoKubernetes/serial/Start (37.99s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220329111508-2053 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220329111508-2053 "sudo systemctl is-active --quiet service kubelet": exit status 1 (606.887318ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.61s)
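
"Not running" is asserted through systemd: systemctl is-active exits 3 for an inactive unit, which minikube ssh surfaces as a non-zero exit of its own (status 1 above). The same check, generalized with a <profile> placeholder:

$ minikube ssh -p <profile> "sudo systemctl is-active --quiet service kubelet"
# a non-zero exit here means kubelet is inactive, which is exactly what this test wants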

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (2.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:170: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:170: (dbg) Done: out/minikube-darwin-amd64 profile list: (1.09060572s)
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
no_kubernetes_test.go:180: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (1.106235357s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (4.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220329111508-2053

                                                
                                                
=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20220329111508-2053: (4.923806901s)
--- PASS: TestNoKubernetes/serial/Stop (4.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20220329110225-2053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.76s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (20.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220329111508-2053 --driver=docker 

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220329111508-2053 --driver=docker : (20.468204211s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (20.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context auto-20220329110225-2053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context auto-20220329110225-2053 replace --force -f testdata/netcat-deployment.yaml: (1.971880237s)
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-jlv84" [5ce46594-b78a-49bf-877b-91b3a44f664c] Pending
helpers_test.go:343: "netcat-668db85669-jlv84" [5ce46594-b78a-49bf-877b-91b3a44f664c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-jlv84" [5ce46594-b78a-49bf-877b-91b3a44f664c] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.008708265s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.01s)
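
Each NetCatPod step deploys a small netcat pod and polls the app=netcat label until it is Running. Outside the harness, roughly the same wait can be written with kubectl directly (a sketch; the suite uses its own polling helper rather than kubectl wait, and <ctx> is a placeholder context name):

$ kubectl --context <ctx> replace --force -f testdata/netcat-deployment.yaml
$ kubectl --context <ctx> wait --for=condition=Ready pod -l app=netcat --timeout=900s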

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220329110225-2053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:182: (dbg) Run:  kubectl --context auto-20220329110225-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (5.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:232: (dbg) Run:  kubectl --context auto-20220329110225-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:232: (dbg) Non-zero exit: kubectl --context auto-20220329110225-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.13290934s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.13s)
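
A passing HairPin test with a failing nc is intentional: the pod is asked to reach itself through its own service name (netcat:8080), and on the auto network, where no explicit CNI is configured, that hairpin connection is evidently expected to fail, so exit status 1 is treated as the correct result. The probe itself, as run above (-z: connect without sending data, -w 5: five-second timeout):

$ kubectl --context <ctx> exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"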

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220329111508-2053 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220329111508-2053 "sudo systemctl is-active --quiet service kubelet": exit status 1 (595.030674ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (111.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220329110226-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-20220329110226-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (1m51.661327015s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (111.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (114.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220329110226-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
E0329 11:19:23.388547    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p false-20220329110226-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (1m54.077589316s)
--- PASS: TestNetworkPlugins/group/false/Start (114.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-5pmkf" [96d85c6d-7055-4fef-80af-194ffa393985] Running
net_test.go:107: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.017777007s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
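
ControllerPod confirms the CNI's own agent is healthy before any connectivity checks: kindnet runs in kube-system under the app=kindnet label. The same check by hand (<ctx> is a placeholder context name):

$ kubectl --context <ctx> get pods -n kube-system -l app=kindnet    # expect Running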

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-20220329110226-2053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kindnet-20220329110226-2053 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:132: (dbg) Done: kubectl --context kindnet-20220329110226-2053 replace --force -f testdata/netcat-deployment.yaml: (2.491074396s)
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-vbdsp" [cecd8263-ddf4-4391-81e5-9eb7d1477462] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0329 11:19:47.406562    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:343: "netcat-668db85669-vbdsp" [cecd8263-ddf4-4391-81e5-9eb7d1477462] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.014797612s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (1.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20220329110226-2053 "pgrep -a kubelet"
net_test.go:120: (dbg) Done: out/minikube-darwin-amd64 ssh -p false-20220329110226-2053 "pgrep -a kubelet": (1.032929253s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (1.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (13.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context false-20220329110226-2053 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:132: (dbg) Done: kubectl --context false-20220329110226-2053 replace --force -f testdata/netcat-deployment.yaml: (2.494229356s)
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-bmfsc" [c551f9fe-ff67-4893-9fd4-715ce1206afa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:343: "netcat-668db85669-bmfsc" [c551f9fe-ff67-4893-9fd4-715ce1206afa] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.009447761s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20220329110226-2053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:182: (dbg) Run:  kubectl --context kindnet-20220329110226-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:232: (dbg) Run:  kubectl --context kindnet-20220329110226-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220329110226-2053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:182: (dbg) Run:  kubectl --context false-20220329110226-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (5.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:232: (dbg) Run:  kubectl --context false-20220329110226-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:232: (dbg) Non-zero exit: kubectl --context false-20220329110226-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.140587018s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (106.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220329110225-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20220329110225-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (1m46.885470345s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (106.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (112.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220329110225-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
E0329 11:20:57.292923    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:21:07.628680    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20220329110225-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (1m52.681261969s)
--- PASS: TestNetworkPlugins/group/bridge/Start (112.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20220329110225-2053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20220329110225-2053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context enable-default-cni-20220329110225-2053 replace --force -f testdata/netcat-deployment.yaml: (1.876158303s)
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-xlv6j" [2f4f93c8-b52f-4a5e-bc0f-236d8c6d8beb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-xlv6j" [2f4f93c8-b52f-4a5e-bc0f-236d8c6d8beb] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.012660011s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220329110225-2053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:182: (dbg) Run:  kubectl --context enable-default-cni-20220329110225-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (2.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:232: (dbg) Run:  kubectl --context enable-default-cni-20220329110225-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:232: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329110225-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (135.063768ms)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:232: (dbg) Run:  kubectl --context enable-default-cni-20220329110225-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:232: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220329110225-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (129.436486ms)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:232: (dbg) Run:  kubectl --context enable-default-cni-20220329110225-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (2.77s)
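
Unlike the auto and false groups, this HairPin test retries the probe and passes once it succeeds: two attempts fail above, the third connects. A shell approximation of that retry, assuming the same placeholder context as before (the harness uses its own backoff, not this loop):

$ until kubectl --context <ctx> exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"; do sleep 2; done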

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20220329110225-2053 "pgrep -a kubelet"
E0329 11:22:18.828610    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (13.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20220329110225-2053 replace --force -f testdata/netcat-deployment.yaml
E0329 11:22:20.108850    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory
E0329 11:22:20.418841    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
net_test.go:132: (dbg) Done: kubectl --context bridge-20220329110225-2053 replace --force -f testdata/netcat-deployment.yaml: (1.942144852s)
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-j2tdq" [d267cbc8-364c-4ba9-ad36-a9ae84143d84] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0329 11:22:22.678490    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:343: "netcat-668db85669-j2tdq" [d267cbc8-364c-4ba9-ad36-a9ae84143d84] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.012893297s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (104.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20220329110225-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 
E0329 11:22:27.803619    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-20220329110225-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (1m44.515153703s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (104.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220329110225-2053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:182: (dbg) Run:  kubectl --context bridge-20220329110225-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:232: (dbg) Run:  kubectl --context bridge-20220329110225-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (138.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220329110226-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
E0329 11:22:58.540256    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory
E0329 11:23:39.503454    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p calico-20220329110226-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (2m18.875809745s)
--- PASS: TestNetworkPlugins/group/calico/Start (138.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (0.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-20220329110225-2053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (10.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kubenet-20220329110225-2053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context kubenet-20220329110225-2053 replace --force -f testdata/netcat-deployment.yaml: (1.952578854s)
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-pk7z6" [03b93dc7-aeef-443f-b25c-b8e3854ab7f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-pk7z6" [03b93dc7-aeef-443f-b25c-b8e3854ab7f8] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.012338939s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.99s)
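
Note: NetCatPod replaces testdata/netcat-deployment.yaml and then polls for a Running pod labelled app=netcat. Outside the harness the same gate can be approximated with kubectl wait (a sketch; the 15m timeout mirrors the test's budget):

    kubectl --context kubenet-20220329110225-2053 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context kubenet-20220329110225-2053 wait --for=condition=ready pod -l app=netcat --timeout=15m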

TestNetworkPlugins/group/kubenet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220329110225-2053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.15s)

TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:182: (dbg) Run:  kubectl --context kubenet-20220329110225-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

TestNetworkPlugins/group/kubenet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:232: (dbg) Run:  kubectl --context kubenet-20220329110225-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)

TestNetworkPlugins/group/cilium/Start (91.55s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220329110226-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 
E0329 11:24:38.054587    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:38.060892    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:38.071017    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:38.092023    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:38.139657    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:38.228285    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:38.388853    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:38.709350    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:39.353299    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:40.636944    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:43.197409    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:47.768602    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:47.774049    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:47.786764    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:47.812722    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:47.853327    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:47.936753    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:48.103263    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:48.317507    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:48.428251    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:49.078320    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:50.361329    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:52.928323    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:58.053308    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:24:58.558667    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:25:01.428691    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory
E0329 11:25:04.242984    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20220329110226-2053 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m31.549503385s)
--- PASS: TestNetworkPlugins/group/cilium/Start (91.55s)
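
Note on the cert_rotation errors interleaved above: they appear to come from client-go's certificate-reload watcher in the long-lived test process, which still references client certificates of profiles (auto, kindnet, false, ...) that earlier tests already tore down. They are shared-kubeconfig noise from this run, not failures of the cilium start itself; listing the profiles directory would confirm the referenced files are simply gone:

    ls /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/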

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:343: "calico-node-sckx7" [605488f3-0314-480d-92ed-e9948926db61] Running
E0329 11:25:08.303258    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
net_test.go:107: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.023738111s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)
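
Note: ControllerPod only waits for the CNI's node agent to be Running in kube-system; the equivalent manual check, using the label from the log above:

    kubectl --context calico-20220329110226-2053 get pods -n kube-system -l k8s-app=calico-node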

TestNetworkPlugins/group/calico/KubeletFlags (0.65s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-20220329110226-2053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.65s)

TestNetworkPlugins/group/calico/NetCatPod (12.98s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context calico-20220329110226-2053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context calico-20220329110226-2053 replace --force -f testdata/netcat-deployment.yaml: (1.944876121s)
net_test.go:146: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-5xk7x" [b08195e4-5d3d-4cde-a1ff-9d3f1355400c] Pending
helpers_test.go:343: "netcat-668db85669-5xk7x" [b08195e4-5d3d-4cde-a1ff-9d3f1355400c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-5xk7x" [b08195e4-5d3d-4cde-a1ff-9d3f1355400c] Running
E0329 11:25:19.042893    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
net_test.go:146: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006646258s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.98s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:163: (dbg) Run:  kubectl --context calico-20220329110226-2053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:182: (dbg) Run:  kubectl --context calico-20220329110226-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:232: (dbg) Run:  kubectl --context calico-20220329110226-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-mr2xn" [5b95b1b0-1170-4c93-81b9-e423ebe3c2fc] Running
E0329 11:26:00.009226    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.016294805s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.66s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20220329110226-2053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.66s)

TestNetworkPlugins/group/cilium/NetCatPod (12.46s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context cilium-20220329110226-2053 replace --force -f testdata/netcat-deployment.yaml
E0329 11:26:07.628269    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
net_test.go:132: (dbg) Done: kubectl --context cilium-20220329110226-2053 replace --force -f testdata/netcat-deployment.yaml: (2.35696933s)
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-6w9qm" [37b9bc80-7391-4198-9cb0-fdd33ae06982] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0329 11:26:09.753265    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
helpers_test.go:343: "netcat-668db85669-6w9qm" [37b9bc80-7391-4198-9cb0-fdd33ae06982] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.064769329s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (12.46s)

TestNetworkPlugins/group/cilium/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20220329110226-2053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.15s)

TestNetworkPlugins/group/cilium/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:182: (dbg) Run:  kubectl --context cilium-20220329110226-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.15s)

TestNetworkPlugins/group/cilium/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:232: (dbg) Run:  kubectl --context cilium-20220329110226-2053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.14s)

TestStartStop/group/old-k8s-version/serial/FirstStart (161.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220329112628-2053 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0329 11:27:01.954607    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:01.960581    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:01.970667    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:01.991453    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:02.036591    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:02.116764    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:02.286588    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:02.607112    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:03.253137    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:04.536626    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:07.146552    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:12.274721    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:17.510842    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:21.207679    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:21.213683    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:21.228034    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:21.253039    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:21.303068    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:21.387248    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:21.553503    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:21.878052    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:21.936603    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:27:22.516449    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:22.518152    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:23.803083    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:26.363381    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:31.486583    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:31.678258    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:27:41.736631    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:43.003101    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:27:45.278122    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory
E0329 11:28:02.217217    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:28:23.963270    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:28:43.186461    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
start_stop_delete_test.go:171: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-20220329112628-2053 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: (2m41.042437927s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (161.04s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220329112628-2053 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context old-k8s-version-20220329112628-2053 create -f testdata/busybox.yaml: (2.069743382s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [62bde8a6-6dd4-473f-bcbd-b19d4847fbe9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0329 11:29:11.857230    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:29:11.862363    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:29:11.872618    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:29:11.892714    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:29:11.936505    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:29:12.016906    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:29:12.186401    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:29:12.507031    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:29:13.153169    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
helpers_test.go:343: "busybox" [62bde8a6-6dd4-473f-bcbd-b19d4847fbe9] Running
E0329 11:29:14.437066    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:29:17.004807    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.014286103s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220329112628-2053 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.21s)
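
Note: DeployApp creates testdata/busybox.yaml, waits up to 8m0s for the pod, then runs a trivial exec as a smoke test of the apiserver exec path. A hand-run equivalent (sketch):

    kubectl --context old-k8s-version-20220329112628-2053 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-20220329112628-2053 wait --for=condition=ready pod/busybox --timeout=8m
    kubectl --context old-k8s-version-20220329112628-2053 exec busybox -- /bin/sh -c "ulimit -n"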

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220329112628-2053 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20220329112628-2053 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.81s)
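
Note: the addon is deliberately pointed at an unreachable registry (fake.domain) with an echoserver image, which suggests the test exercises the --images/--registries override plumbing rather than a working metrics-server. The resulting image reference can be inspected with (a sketch):

    kubectl --context old-k8s-version-20220329112628-2053 get deploy metrics-server -n kube-system -o jsonpath='{.spec.template.spec.containers[0].image}'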

TestStartStop/group/old-k8s-version/serial/Stop (18.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20220329112628-2053 --alsologtostderr -v=3
E0329 11:29:22.125438    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:29:23.394153    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 11:29:32.365745    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:29:38.058955    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-20220329112628-2053 --alsologtostderr -v=3: (18.235721852s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (18.24s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220329112628-2053 -n old-k8s-version-20220329112628-2053
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220329112628-2053 -n old-k8s-version-20220329112628-2053: exit status 7 (156.395427ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220329112628-2053 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/old-k8s-version/serial/SecondStart (57.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220329112628-2053 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0329 11:29:45.886433    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:29:47.767799    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:29:52.852982    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:30:04.237271    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 11:30:05.110014    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:30:05.786469    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:05.886471    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:05.891538    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:05.903009    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:05.927900    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:05.977878    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:06.061482    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:06.227917    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:06.552897    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:07.202883    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:08.486353    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:11.052848    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:15.527845    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:16.180308    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:26.427875    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:33.813651    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
start_stop_delete_test.go:241: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-20220329112628-2053 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: (56.868556365s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220329112628-2053 -n old-k8s-version-20220329112628-2053
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (57.55s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (18.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-6kpkj" [1703ee02-2a13-420a-b13e-a69b337ba498] Pending
helpers_test.go:343: "kubernetes-dashboard-766959b846-6kpkj" [1703ee02-2a13-420a-b13e-a69b337ba498] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0329 11:30:46.913682    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
helpers_test.go:343: "kubernetes-dashboard-766959b846-6kpkj" [1703ee02-2a13-420a-b13e-a69b337ba498] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.015824913s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (18.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (7.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-6kpkj" [1703ee02-2a13-420a-b13e-a69b337ba498] Running
E0329 11:30:57.286838    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007437289s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220329112628-2053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0329 11:30:59.924656    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:59.929910    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:59.940064    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:30:59.963472    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:31:00.011688    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:31:00.097321    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:31:00.259408    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:31:00.580577    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:31:01.223714    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
start_stop_delete_test.go:276: (dbg) Done: kubectl --context old-k8s-version-20220329112628-2053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (2.096182846s)
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (7.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-20220329112628-2053 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.66s)
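
Note: the image audit shells into the node through minikube's ssh wrapper and reads the container runtime's image list; the same listing can be pulled manually:

    out/minikube-darwin-amd64 ssh -p old-k8s-version-20220329112628-2053 "sudo crictl images -o json"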

TestStartStop/group/old-k8s-version/serial/Pause (4.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-20220329112628-2053 --alsologtostderr -v=1
E0329 11:31:02.506222    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220329112628-2053 -n old-k8s-version-20220329112628-2053
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220329112628-2053 -n old-k8s-version-20220329112628-2053: exit status 2 (643.119366ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220329112628-2053 -n old-k8s-version-20220329112628-2053
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220329112628-2053 -n old-k8s-version-20220329112628-2053: exit status 2 (681.075928ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-20220329112628-2053 --alsologtostderr -v=1
E0329 11:31:05.067373    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220329112628-2053 -n old-k8s-version-20220329112628-2053
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220329112628-2053 -n old-k8s-version-20220329112628-2053
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.38s)
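
Note: taken together, the old-k8s-version serial steps above walk the full profile lifecycle. Stripped of the assertions, the underlying command sequence is roughly the following ("old-k8s-demo" is a placeholder profile; the non-zero status exits while stopped or paused are expected, as the harness notes above):

    out/minikube-darwin-amd64 start -p old-k8s-demo --driver=docker --kubernetes-version=v1.16.0
    out/minikube-darwin-amd64 stop -p old-k8s-demo
    out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-demo
    out/minikube-darwin-amd64 start -p old-k8s-demo --driver=docker --kubernetes-version=v1.16.0
    out/minikube-darwin-amd64 pause -p old-k8s-demo
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-demo
    out/minikube-darwin-amd64 unpause -p old-k8s-demo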

TestStartStop/group/no-preload/serial/FirstStart (105s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220329113122-2053 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6-rc.0
E0329 11:31:27.861913    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:31:40.921349    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:31:55.725618    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:32:01.940029    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:32:17.504002    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory
E0329 11:32:21.194265    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:32:21.889674    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:32:29.723242    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:32:30.741720    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 11:32:48.940829    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:32:49.785256    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
start_stop_delete_test.go:171: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220329113122-2053 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6-rc.0: (1m44.997695716s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (105.00s)
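
A note on what this step covers: --preload=false forces a cold start in which every Kubernetes image is pulled individually instead of being restored from the preloaded tarball. A minimal reproduction sketch, assuming a minikube binary on PATH (profile name copied from the run above):

    minikube start -p no-preload-20220329113122-2053 --memory=2200 --wait=true \
      --preload=false --driver=docker --kubernetes-version=v1.23.6-rc.0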

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220329113122-2053 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context no-preload-20220329113122-2053 create -f testdata/busybox.yaml: (2.020908231s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [fb66de3f-40d9-460a-a343-8d48e3936440] Pending
helpers_test.go:343: "busybox" [fb66de3f-40d9-460a-a343-8d48e3936440] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [fb66de3f-40d9-460a-a343-8d48e3936440] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.022964844s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220329113122-2053 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.18s)
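
The deploy step applies testdata/busybox.yaml, waits up to 8m0s for pods labeled integration-test=busybox to become Ready, then reads the open-file limit inside the container. A roughly equivalent hand-run sketch (a sketch of the checks, not the harness itself):

    kubectl --context no-preload-20220329113122-2053 create -f testdata/busybox.yaml
    kubectl --context no-preload-20220329113122-2053 wait pod \
      -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context no-preload-20220329113122-2053 exec busybox -- /bin/sh -c "ulimit -n"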

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220329113122-2053 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20220329113122-2053 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)
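
This step enables the metrics-server addon on a live cluster while overriding its image and registry (fake.domain looks deliberately unresolvable, presumably so no real image is ever pulled), then verifies the Deployment object exists. The same two commands, runnable as-is against that profile:

    minikube addons enable metrics-server -p no-preload-20220329113122-2053 \
      --images=MetricsServer=k8s.gcr.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context no-preload-20220329113122-2053 describe deploy/metrics-server -n kube-system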

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (17.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20220329113122-2053 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-20220329113122-2053 --alsologtostderr -v=3: (17.514299892s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (17.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220329113122-2053 -n no-preload-20220329113122-2053
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220329113122-2053 -n no-preload-20220329113122-2053: exit status 7 (166.72972ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220329113122-2053 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.40s)
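
Worth calling out the exit-code convention here: against this stopped profile, minikube status exited 7 and printed "Stopped", and the test explicitly tolerates that before enabling an addon offline. To observe the same convention by hand:

    minikube status --format={{.Host}} -p no-preload-20220329113122-2053
    echo $?    # 7 while the cluster is stopped, per the run above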

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (377.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220329113122-2053 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6-rc.0
E0329 11:33:43.809619    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:34:11.569384    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:34:11.576995    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:34:11.587154    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:34:11.608544    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:34:11.658383    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:34:11.738605    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:34:11.852858    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:34:11.901235    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:34:12.225612    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:34:12.866043    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:34:14.146178    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:34:16.712428    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:34:21.836534    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:34:23.381259    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 11:34:32.086655    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:34:38.040336    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:34:39.569926    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:34:47.761582    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220329113122-2053 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.6-rc.0: (6m16.51764394s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220329113122-2053 -n no-preload-20220329113122-2053
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (377.39s)
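
The second start re-issues the identical command against the stopped profile; because of --wait=true it blocks until the Kubernetes components report healthy again, which is why this single step accounts for nearly all of the 377s. A sketch with the same flags as logged:

    minikube start -p no-preload-20220329113122-2053 --memory=2200 --wait=true \
      --preload=false --driver=docker --kubernetes-version=v1.23.6-rc.0
    minikube status --format={{.Host}} -p no-preload-20220329113122-2053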

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (349.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220329113512-2053 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.5
E0329 11:35:33.536407    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:35:33.625015    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:35:57.286388    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:35:59.921176    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:36:07.615846    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 11:36:27.392461    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 11:36:27.649438    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:36:55.456592    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:37:01.936059    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:37:17.497022    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory
E0329 11:37:21.193580    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:38:40.631225    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory
E0329 11:39:00.410728    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:39:11.575863    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:39:11.844788    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:39:23.378309    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 11:39:38.040485    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:39:39.297846    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:39:47.760628    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220329113512-2053 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.5: (5m49.575708458s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (349.58s)
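
The --embed-certs flag asks minikube to inline client certificate and key data into kubeconfig rather than pointing at files under .minikube/profiles. A spot-check sketch, assuming the default kubeconfig path (the grep is a heuristic, not part of the test):

    minikube start -p embed-certs-20220329113512-2053 --memory=2200 --wait=true \
      --embed-certs --driver=docker --kubernetes-version=v1.23.5
    grep -c client-certificate-data ~/.kube/config    # non-zero once certs are embedded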

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-zph4f" [519bf540-cfa1-485e-9ad4-550c931c45e6] Running
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016734497s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-zph4f" [519bf540-cfa1-485e-9ad4-550c931c45e6] Running
E0329 11:40:04.221262    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008274127s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20220329113122-2053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0329 11:40:05.871491    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
start_stop_delete_test.go:276: (dbg) Done: kubectl --context no-preload-20220329113122-2053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.853185155s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20220329113122-2053 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.68s)
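
The image verification shells into the node and lists images through the CRI; anything outside the expected Kubernetes image set (here the busybox test image) gets reported. The same listing by hand, with jq assumed installed purely for readability:

    minikube ssh -p no-preload-20220329113122-2053 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'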

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20220329113122-2053 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-darwin-amd64 pause -p no-preload-20220329113122-2053 --alsologtostderr -v=1: (1.0058542s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220329113122-2053 -n no-preload-20220329113122-2053
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220329113122-2053 -n no-preload-20220329113122-2053: exit status 2 (648.95851ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220329113122-2053 -n no-preload-20220329113122-2053
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220329113122-2053 -n no-preload-20220329113122-2053: exit status 2 (715.019151ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-20220329113122-2053 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220329113122-2053 -n no-preload-20220329113122-2053
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220329113122-2053 -n no-preload-20220329113122-2053
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.62s)
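
The pause cycle above encodes a useful asymmetry: after pause, {{.APIServer}} reports Paused while {{.Kubelet}} reports Stopped, both with exit status 2, and unpause restores them (the final two status calls succeeded, so the report, which only dumps stdout on non-zero exits, shows no output for them). The same cycle by hand:

    minikube pause -p no-preload-20220329113122-2053
    minikube status --format={{.APIServer}} -p no-preload-20220329113122-2053   # Paused, exit 2
    minikube status --format={{.Kubelet}} -p no-preload-20220329113122-2053     # Stopped, exit 2
    minikube unpause -p no-preload-20220329113122-2053
    minikube status --format={{.APIServer}} -p no-preload-20220329113122-2053   # exit 0 once resumed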

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (335.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220329114030-2053 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.5
E0329 11:40:57.276192    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:40:59.917664    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:41:01.139262    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220329114030-2053 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.5: (5m35.097036904s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (335.10s)
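
This group's only twist is --apiserver-port=8444, which moves the apiserver off minikube's default 8443 inside the node (with the docker driver, kubectl still reaches it through a forwarded local port). A sketch using the logged flags plus a quick reachability check:

    minikube start -p default-k8s-different-port-20220329114030-2053 --memory=2200 \
      --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.23.5
    kubectl --context default-k8s-different-port-20220329114030-2053 cluster-info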

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220329113512-2053 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context embed-certs-20220329113512-2053 create -f testdata/busybox.yaml: (1.886747259s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [44802150-61d6-4746-bb3b-b75f25c2a1a2] Pending
helpers_test.go:343: "busybox" [44802150-61d6-4746-bb3b-b75f25c2a1a2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [44802150-61d6-4746-bb3b-b75f25c2a1a2] Running
E0329 11:41:07.606254    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 11:41:10.874014    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.011886069s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220329113512-2053 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220329113512-2053 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20220329113512-2053 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.80s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (15.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20220329113512-2053 --alsologtostderr -v=3
start_stop_delete_test.go:213: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-20220329113512-2053 --alsologtostderr -v=3: (15.951858489s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (15.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220329113512-2053 -n embed-certs-20220329113512-2053
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220329113512-2053 -n embed-certs-20220329113512-2053: exit status 7 (186.575852ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220329113512-2053 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (590.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220329113512-2053 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.5
E0329 11:42:01.940719    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:42:17.497636    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory
E0329 11:42:21.192515    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:42:26.472674    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 11:43:09.618654    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:43:09.624053    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:43:09.634260    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:43:09.660658    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:43:09.702355    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:43:09.786920    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:43:09.947189    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:43:10.272772    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:43:10.916636    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:43:12.196961    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:43:14.757602    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:43:19.883945    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:43:25.082149    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:43:30.125472    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:43:44.301204    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:43:50.607550    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:44:11.567839    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:44:11.844515    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:44:23.371853    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 11:44:31.568191    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:44:38.048637    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:44:47.756309    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:45:04.220362    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 11:45:05.868918    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:45:34.933290    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:45:53.742070    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:45:57.529340    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:46:00.159186    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220329113512-2053 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.5: (9m50.332139129s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220329113512-2053 -n embed-certs-20220329113512-2053
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (590.99s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220329114030-2053 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context default-k8s-different-port-20220329114030-2053 create -f testdata/busybox.yaml: (1.935420253s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [5ce6981f-3df8-485a-b484-8efef69b3c6e] Pending
E0329 11:46:07.854109    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
helpers_test.go:343: "busybox" [5ce6981f-3df8-485a-b484-8efef69b3c6e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [5ce6981f-3df8-485a-b484-8efef69b3c6e] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.013099576s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220329114030-2053 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.08s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220329114030-2053 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20220329114030-2053 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Stop (18.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220329114030-2053 --alsologtostderr -v=3
E0329 11:46:29.236581    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220329114030-2053 --alsologtostderr -v=3: (18.231239125s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (18.23s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220329114030-2053 -n default-k8s-different-port-20220329114030-2053
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220329114030-2053 -n default-k8s-different-port-20220329114030-2053: exit status 7 (159.309296ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220329114030-2053 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/SecondStart (596.77s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220329114030-2053 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.5
E0329 11:47:02.185882    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:47:17.745429    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory
E0329 11:47:21.443184    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
E0329 11:47:23.268255    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:48:09.870648    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:48:37.587077    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
E0329 11:49:10.993585    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory
E0329 11:49:11.820734    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:49:12.098009    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
E0329 11:49:23.633650    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
E0329 11:49:38.293765    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kindnet-20220329110226-2053/client.crt: no such file or directory
E0329 11:49:48.010644    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/false-20220329110226-2053/client.crt: no such file or directory
E0329 11:50:04.482777    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 11:50:06.123575    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/calico-20220329110226-2053/client.crt: no such file or directory
E0329 11:50:34.915953    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:50:57.532501    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/skaffold-20220329105920-2053/client.crt: no such file or directory
E0329 11:51:00.175075    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/cilium-20220329110226-2053/client.crt: no such file or directory
E0329 11:51:07.863156    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/functional-20220329101744-2053/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220329114030-2053 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.5: (9m56.129741708s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220329114030-2053 -n default-k8s-different-port-20220329114030-2053
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (596.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-zb484" [5eda2244-bc3c-490a-ab25-899cf6ccbcc2] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012665127s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-zb484" [5eda2244-bc3c-490a-ab25-899cf6ccbcc2] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006246946s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20220329113512-2053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Done: kubectl --context embed-certs-20220329113512-2053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.887433261s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20220329113512-2053 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.65s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20220329113512-2053 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220329113512-2053 -n embed-certs-20220329113512-2053
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220329113512-2053 -n embed-certs-20220329113512-2053: exit status 2 (651.372914ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220329113512-2053 -n embed-certs-20220329113512-2053
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220329113512-2053 -n embed-certs-20220329113512-2053: exit status 2 (668.040002ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-20220329113512-2053 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220329113512-2053 -n embed-certs-20220329113512-2053
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220329113512-2053 -n embed-certs-20220329113512-2053
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (70.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220329115153-2053 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6-rc.0
E0329 11:52:02.191798    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/enable-default-cni-20220329110225-2053/client.crt: no such file or directory
E0329 11:52:17.756929    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/auto-20220329110225-2053/client.crt: no such file or directory
E0329 11:52:21.458456    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/bridge-20220329110225-2053/client.crt: no such file or directory
start_stop_delete_test.go:171: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220329115153-2053 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6-rc.0: (1m10.461219834s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (70.46s)
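
The interleaved cert_rotation.go errors do not come from this test: the long-lived test process is likely still watching client certificates for profiles that earlier tests created and then deleted, so reloads fail once the files under .minikube/profiles/<name>/ are gone. They appear harmless here. A hedged way to check for leftover references (context names taken from this log):

    # contexts the shared kubeconfig still knows about
    kubectl config get-contexts
    # drop a stale context for an already-deleted profile, if one remains
    kubectl config delete-context auto-20220329110225-2053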

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220329115153-2053 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)
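
The enable step doubles as a smoke test of minikube's per-addon overrides: --images remaps an addon component to another image, and --registries remaps where it is pulled from; the test deliberately points metrics-server at a fake registry. The general form, with a placeholder profile:

    # component names are addon-specific; MetricsServer is the one used above
    minikube addons enable metrics-server -p <profile> \
      --images=MetricsServer=k8s.gcr.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain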

TestStartStop/group/newest-cni/serial/Stop (19.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20220329115153-2053 --alsologtostderr -v=3
E0329 11:53:07.659073    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/ingress-addon-legacy-20220329102231-2053/client.crt: no such file or directory
E0329 11:53:09.884588    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/no-preload-20220329113122-2053/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-20220329115153-2053 --alsologtostderr -v=3: (19.211995759s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (19.21s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220329115153-2053 -n newest-cni-20220329115153-2053
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220329115153-2053 -n newest-cni-20220329115153-2053: exit status 7 (165.562355ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220329115153-2053 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.40s)
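
Exit status 7 from minikube status is the expected encoding for a fully stopped profile (per the command's help text, the exit code is a bitmask of host, cluster, and Kubernetes state), and enabling an addon while stopped simply records it for the next start. A small sketch of the same guard in a script, assuming this profile name:

    # any non-zero status means some component is down; 7 = everything stopped
    if ! minikube status -p newest-cni-20220329115153-2053 >/dev/null 2>&1; then
      # safe to queue addons while stopped; they take effect on the next start
      minikube addons enable dashboard -p newest-cni-20220329115153-2053
    fi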

TestStartStop/group/newest-cni/serial/SecondStart (57.85s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220329115153-2053 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6-rc.0
E0329 11:54:11.829789    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/old-k8s-version-20220329112628-2053/client.crt: no such file or directory
E0329 11:54:12.107896    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/kubenet-20220329110225-2053/client.crt: no such file or directory
start_stop_delete_test.go:241: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220329115153-2053 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.6-rc.0: (57.152063963s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220329115153-2053 -n newest-cni-20220329115153-2053
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (57.85s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.67s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20220329115153-2053 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.67s)
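
The image audit shells into the node and dumps the container runtime's view as JSON; the harness then compares that list against the images expected for this Kubernetes version. A rough manual equivalent, assuming jq on the host (jq is not part of the harness):

    # repoTags outside k8s.gcr.io / gcr.io/k8s-minikube would be flagged as non-minikube images
    minikube ssh -p newest-cni-20220329115153-2053 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]' \
      | grep -Ev 'k8s.gcr.io|gcr.io/k8s-minikube'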

TestStartStop/group/newest-cni/serial/Pause (4.4s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20220329115153-2053 --alsologtostderr -v=1
E0329 11:54:23.641123    2053 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13730-908-eb19396baacb27bcde6912a0ea5aa6419fc16109/.minikube/profiles/addons-20220329101201-2053/client.crt: no such file or directory
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220329115153-2053 -n newest-cni-20220329115153-2053
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220329115153-2053 -n newest-cni-20220329115153-2053: exit status 2 (650.759449ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220329115153-2053 -n newest-cni-20220329115153-2053
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220329115153-2053 -n newest-cni-20220329115153-2053: exit status 2 (691.879944ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-20220329115153-2053 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220329115153-2053 -n newest-cni-20220329115153-2053
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220329115153-2053 -n newest-cni-20220329115153-2053
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.40s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-9wzsp" [d829aafd-2acc-485c-9c90-024938852ff3] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020657601s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)
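
These helpers poll for pods matching a label selector until they look healthy, with a 9m budget. Outside the Go harness, kubectl wait is the closest one-liner (an approximation, not what helpers_test.go literally runs):

    kubectl --context default-k8s-different-port-20220329114030-2053 \
      -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard \
      --for=condition=Ready --timeout=9m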

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.93s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-9wzsp" [d829aafd-2acc-485c-9c90-024938852ff3] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010558744s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220329114030-2053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Done: kubectl --context default-k8s-different-port-20220329114030-2053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.91597485s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.93s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.64s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220329114030-2053 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.64s)

TestStartStop/group/default-k8s-different-port/serial/Pause (4.32s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220329114030-2053 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220329114030-2053 -n default-k8s-different-port-20220329114030-2053
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220329114030-2053 -n default-k8s-different-port-20220329114030-2053: exit status 2 (627.368545ms)

-- stdout --
	Paused
-- /stdout --

start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220329114030-2053 -n default-k8s-different-port-20220329114030-2053
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220329114030-2053 -n default-k8s-different-port-20220329114030-2053: exit status 2 (619.777123ms)

-- stdout --
	Stopped
-- /stdout --

start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220329114030-2053 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220329114030-2053 -n default-k8s-different-port-20220329114030-2053
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220329114030-2053 -n default-k8s-different-port-20220329114030-2053
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (4.32s)

Test skip (20/299)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.23.5/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.5/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.5/cached-images (0.00s)

TestDownloadOnly/v1.23.5/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.5/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.5/binaries (0.00s)

TestDownloadOnly/v1.23.6-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.23.6-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/binaries (0.00s)

TestAddons/parallel/Registry (13.15s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 13.561224ms
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-d7998" [855ebbd5-a316-4a62-8d2e-554f091dc46c] Running
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009566495s
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-qpvth" [bb763bf3-1d3f-4860-9b88-2143776091a3] Running
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.031861501s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20220329101201-2053 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20220329101201-2053 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:296: (dbg) Done: kubectl --context addons-20220329101201-2053 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.969143212s)
addons_test.go:306: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (13.15s)
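
The probe works because the registry addon publishes a Service named "registry" in kube-system, which cluster DNS resolves at registry.kube-system.svc.cluster.local; wget --spider checks reachability without downloading anything. The remainder of the test assumes direct host-to-cluster networking, which the Docker driver on macOS cannot provide, hence the skip. The in-cluster half can be rerun by hand with the same command the test used:

    kubectl --context addons-20220329101201-2053 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"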

TestAddons/parallel/Ingress (13.74s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20220329101201-2053 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220329101201-2053 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context addons-20220329101201-2053 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [1e5c1fc2-5c97-47ec-8193-2f7df9ea929f] Pending
helpers_test.go:343: "nginx" [1e5c1fc2-5c97-47ec-8193-2f7df9ea929f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [1e5c1fc2-5c97-47ec-8193-2f7df9ea929f] Running
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.012326164s
addons_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220329101201-2053 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (13.74s)
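
The probe curls from inside the node because, with the Docker driver on macOS, the ingress controller's ports are not reachable from the host without a forward, which is also why the DNS variant is skipped. The Host header is what selects the ingress rule, since nginx routes by virtual host. The same probe, plus the hedged host-side alternative:

    # inside the node, port 80 is ingress-nginx
    minikube -p addons-20220329101201-2053 ssh \
      "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # reaching it from the host would first need a tunnel (runs in the foreground)
    minikube tunnel -p addons-20220329101201-2053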

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:449: Skipping Olm addon till images are fixed
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (9.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) Run:  kubectl --context functional-20220329101744-2053 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1575: (dbg) Run:  kubectl --context functional-20220329101744-2053 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1580: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:343: "hello-node-connect-74cf8bc446-h8cmr" [6f7e26de-e3fc-4b93-b5f8-23f9a4f0b7fb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:343: "hello-node-connect-74cf8bc446-h8cmr" [6f7e26de-e3fc-4b93-b5f8-23f9a4f0b7fb] Running
functional_test.go:1580: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.01366778s
functional_test.go:1586: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (9.14s)
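
The skip is about reachability, not the Service: on drivers that need port forwarding (Docker on macOS), a NodePort is not directly routable from the host, which is what the issue linked above tracks. A hedged way to reach such a service anyway is minikube's own forwarding:

    # prints a host-reachable URL, keeping a forward open on port-forwarded drivers
    minikube service hello-node-connect --url -p functional-20220329101744-2053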

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:547: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (32.67s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20220329102231-2053 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20220329102231-2053 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.040603333s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220329102231-2053 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220329102231-2053 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (204.885451ms)

** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.109.148.11:443: connect: connection refused
** /stderr **

addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220329102231-2053 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220329102231-2053 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (158.084835ms)

** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.109.148.11:443: connect: connection refused
** /stderr **

addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220329102231-2053 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220329102231-2053 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (176.753266ms)

** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.109.148.11:443: connect: connection refused
** /stderr **

addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220329102231-2053 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220329102231-2053 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (158.882392ms)

** stderr **
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.109.148.11:443: connect: connection refused
** /stderr **

addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220329102231-2053 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20220329102231-2053 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [ab357036-9f47-4c38-a00b-af2e5ca410d2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [ab357036-9f47-4c38-a00b-af2e5ca410d2] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.011626579s
addons_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220329102231-2053 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (32.67s)
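
All four connection-refused retries are the same failure: the ingress-nginx admission webhook was not yet answering on its Service IP even though the controller pod had just turned Ready, and the test's retry loop simply outlasted the gap. A hedged pre-check before applying Ingress manifests against such a cluster:

    # the replace starts succeeding once this endpoints object has an address
    kubectl --context ingress-addon-legacy-20220329102231-2053 -n ingress-nginx \
      get endpoints ingress-nginx-controller-admission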

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.86s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20220329110225-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220329110225-2053
--- SKIP: TestNetworkPlugins/group/flannel (0.86s)

TestStartStop/group/disable-driver-mounts (0.9s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20220329114029-2053" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20220329114029-2053
--- SKIP: TestStartStop/group/disable-driver-mounts (0.90s)